
EduAsiaNews, Yogyakarta – The use of generative artificial intelligence technologies that do not adhere to ethical principles has the potential to undermine human dignity. In this context, the Indonesian government’s decision to temporarily block Grok AI has been viewed as an important opportunity to discuss the direction of artificial intelligence (AI) governance more comprehensively.
A Professor of Applied Artificial Intelligence at Universitas Muhammadiyah Yogyakarta (UMY), Prof. Ir. Slamet Riyadi, S.T., M.Sc., Ph.D., regarded the government’s policy as a signal of the state’s seriousness in protecting the public from the risks of digital technology misuse.
“I see this as a positive step, although blocking is not a final solution. The policy demonstrates the state’s alignment with public interests in addressing the impacts and potential misuse of technology,” Slamet stated on Monday (2/2) at UMY.
Furthermore, Slamet explained that generative AI is a technology designed to produce content—such as text, images, audio, and video—through deep learning processes based on large-scale data.
“Applications such as Grok or ChatGPT are products of generative artificial intelligence. They operate using deep learning processes that leverage massive datasets. This capability is both their strength and their weakness, as the outputs produced are highly dependent on data quality and the prompts provided by users,” he explained.
Regarding the effectiveness of blocking, Slamet assessed that such a policy is temporary and does not fully eliminate the possibility of misuse. Technically, access to blocked AI applications may still be possible through certain means; for the general public, however, blocking is considered sufficiently effective in minimizing risks.
According to him, future AI security standards and governance frameworks must be designed comprehensively. Such an approach should not focus solely on technological aspects but also include enhancing user literacy and establishing firm, adaptive regulations in line with technological advancements.
“If we only look at the technological aspect, the issue of AI misuse will never be fully resolved. There must be an integrated approach involving technology, humans as users, and clear regulations,” Slamet emphasized.
The lecturer of the Information Technology Study Program at UMY’s Faculty of Engineering added that regulation plays a crucial role as the final safeguard in AI governance, including the enforcement of sanctions and strict prohibitions against unethical uses of AI.
Ultimately, generative AI is developed for the benefit of humanity. The challenge ahead, therefore, is not merely to limit its risks, but to ensure that artificial intelligence technologies are truly utilized to strengthen human values and uphold human dignity. (NF)