The Papal guidelines on artificial intelligence: Looking at the dangers

September 30, 2025

(Part 2)

It should be no surprise that, despite all the numerous benefits to society we enumerated, Artificial Intelligence (AI) also brings with it numerous social costs. It was no different with the First Industrial Revolution, which brought with it many serious harms to society (e.g., inhuman treatment of workers, monopoly practices, cutthroat competition, environmental pollution, moral decadence, etc.). Different political systems tried to address the social harms brought about by the First Industrial Revolution and the subsequent ones through appropriate controls and regulations.

A good number of political leaders turned for moral guidelines to the social teachings found in the papal encyclicals that were addressed, not only to Catholics or Christians, but to all “men and women of goodwill.” We can assume that the same learning process can be expected as the world confronts the possible harms that AI and other technologies associated with Industrial Revolution 4.0 will bring with them.

Again, we turned to ChatGPT for a summary of AI’s potential harms to society:

1. Job Displacement and Economic Inequality. Automation replaces human labor, especially in repetitive or low-skill jobs (e.g., factory work, data entry, call centers); widening inequality as high-tech industries grow while others shrink; and skills gaps develop as many workers are unprepared for the more knowledge-intensive jobs that AI creates. Specific example: Self-checkout machines and delivery drones reduce the need for cashiers and drivers.

2. Bias and Discrimination. AI models trained on biased data can produce racist, sexist, or unfair outcomes; there can be discrimination in hiring, lending, policing, and healthcare. Specific example: Facial recognition systems have been shown at times to misidentify black and Asian faces more often than white faces.

3. Loss of Privacy and Mass Surveillance. AI enables real-time facial recognition, location tracking, and data profiling; governments (as in China) and corporations can use AI to monitor behavior without consent. Specific example: China’s social credit system uses AI to track citizens’ activities and behavior.

4. Autonomous Weapons and Warfare. AI is being integrated into lethal autonomous weapons (drones, robots) without human oversight; this raises concerns about war crimes, escalation, and lack of accountability. Specific example: “Killer robots” could make life-and-death decisions without human intervention.

5. Misinformation and Manipulation. AI can generate deepfakes, fake news, and propaganda at scale; it can be used to manipulate public opinion during elections or social movements (this was already rampant during the last Philippine elections). Specific example: AI-generated political videos or fake voices can mislead the public.

6. Loss of Human Agency and Autonomy. An over-reliance on AI can erode critical thinking, decision-making, and freedom of choice; people may defer to algorithms even when they are wrong or harmful. Specific example: Social media algorithms influence what you see, think, and believe — without your realizing it.

7. Legal and Ethical Uncertainty. AI decisions (e.g., in hiring, insurance, or criminal justice) are often non-transparent. Who is responsible when an AI system makes a harmful mistake? Specific example: An AI-driven car crashes — who is liable: the company, the programmer, or the user?

8. Environmental Impact. Training large AI models consumes huge amounts of energy. It contributes to carbon emissions and environmental degradation. Specific example: Training GPT-3 was estimated to emit as much CO2 as five cars over their lifetime.

9. Social Fragmentation and Polarization. AI-curated content (e.g., on TikTok or Facebook) can amplify echo chambers, hate speech, and extremism; it can make societies more divided and distrustful. Specific example: Social media algorithms prioritize content that triggers strong emotional responses — often outrage.

To summarize, the societal harms of AI can include job losses, economic inequality, and skills gaps in the economy; unfair treatment based on race, gender, or class; loss of privacy through surveillance, data misuse, and loss of consent; autonomous weapons and AI in military applications; deepfakes, fake news, and election interference; reduced critical thinking as a result of overdependence on algorithms; unclear personal responsibility or lack of legal accountability; high energy use and carbon emissions from AI training; and breakdown of social cohesion because of echo chambers, polarization, and extremism.

These social harms that can be inflicted by AI are not inevitable if the appropriate controls and regulations are in force. It will be necessary to have general moral and ethical guidelines that will be the bases for such controls and regulations coming from the State and from private initiatives of industry and business organizations themselves.

As was the case with the First Industrial Revolution, a good number of the laws, controls, and regulations that prevented or at least minimized the social costs were inspired by the moral guidelines that were contained in Rerum Novarum of Pope Leo XIII, such as those related to the setting of minimum wages, the role of labor unions, the prevention of child labor, etc.

It is, therefore, providential that from the very beginning of his papacy, Pope Leo XIV has already announced that the Magisterium of the Catholic Church under his care will be pro-active in giving moral and ethical guidelines related to the new technologies introduced by the Fourth Industrial Revolution.

True to his promise on the day of his election, Pope Leo XIV wasted no time in coming out with clear ethical guidelines on the use of Artificial Intelligence. The following is a summary of his key guidelines issued between May and July this year:

1. AI as a tool, not a substitute for humanity. He emphasizes that despite AI being an “exceptional product of human genius,” it must always remain a tool and never replace or diminish human dignity or fundamental freedoms (from Vatican News). Notably, he told world leaders: “Artificial Intelligence functions as a tool for the good of human beings — not to diminish them, not to replace them” (Catholic News Agency).

2. Protecting youth and nurturing true wisdom. In a message to the AI and Ethics conference in Rome (held last June), he warned about AI’s possible negative effects on the intellectual, neurological, and spiritual development of children and young people (Catholic News Agency). He asserted that access to extensive data should not be mistaken for intelligence, which is grounded in openness to life’s deeper questions and commitment to truth and goodness (Catholic News). He urged that “Our youth must be helped, and not hindered, in their journey toward maturity and true responsibility.”

3. Fostering ethical governance and the common good. Addressing a summit in Geneva (last July), he pointed out that ethical responsibility lies both with developers and users: “Although responsibility for the ethical use of AI begins with those who develop, manage, and oversee them, those who use them also share in this responsibility.” He underlined the need for regulatory frameworks and ethical management centered on the human person, beyond mere efficiency or utility. (Vatican News)

4. Advancing peace, dialogue, and integral human development. In a message tied to the World Summit on AI (Geneva), he conveyed that AI must help build a “more human order of social relations” and “peaceful and just societies,” fostering integral human development and fraternity rather than conflict. He warned that AI lacks moral discernment and cannot form genuine relationships, so its development must be accompanied by discernment, respect for human values, and conscience-based judgment.

5. Evaluating AI via a “Superior Ethical Criterion.” He insisted that the benefits or risks of AI should be assessed by how well it supports the integral development of the human person and society, including material, intellectual, and spiritual well-being. He further warned of a societal “loss — or at least an eclipse — of the sense of what is human,” urging deeper reflection on our shared human dignity.

(To be continued.)

Bernardo M. Villegas has a Ph.D. in Economics from Harvard, is professor emeritus at the University of Asia and the Pacific, and a visiting professor at the IESE Business School in Barcelona, Spain. He was a member of the 1986 Constitutional Commission.

bernardo.villegas@uap.asia
