

Renowned physicist Stephen Hawking repeatedly cautioned that rapid advances in artificial intelligence could pose serious risks to humanity. He warned that once AI systems pass a certain threshold, they could begin redesigning themselves at an ever-increasing pace, quickly outstripping human capabilities. Because biological evolution is slow compared with fast-improving algorithms, humans could struggle to compete, and AI could come to dominate global systems. Hawking was also deeply concerned about autonomous weapons, warning that an arms race in such technologies could trigger devastating global crises.
At the same time, Hawking acknowledged that AI holds immense potential if properly controlled, believing it could help eradicate poverty, cure complex diseases, and reverse environmental damage. However, he stressed that AI whose goals are misaligned with human interests could threaten our survival if development is not carefully regulated. In his final writings, he urged governments and scientists to put strict safeguards in place before AI advances beyond human control. He also highlighted the economic risks, noting that automation could eliminate middle-class jobs and widen inequality if protections are not introduced.