Elon Musk Asked Point-Blank by Reporter if AI Is Going to Kill Humanity

Is artificial intelligence going to kill us all? Elon Musk was asked that question Wednesday by Fox News correspondent Hillary Vaughn. He responded, “I hope not,” but added that there’s “some chance” of it happening.

If you’re a person who likes to overthink things like I do, you’re probably having nightmares about AI. The sudden growth in the capabilities of machine intelligence is deeply concerning, especially considering that these machines can be programmed by their makers to make judgments about what counts as ethical and unethical behavior.

Ask ChatGPT to write an article about the evils of the abortion industry and it will give you an answer about choice and women’s rights that sounds like it was written by a Democratic operative. What happens if AI decides that those who hold values outside its programming are wrong and need to be “reprogrammed” — or worse, destroyed?
Apparently, governmental and tech leadership are worried too.

On Wednesday, tech industry leaders — including Musk of the X social media platform, Sundar Pichai of Google and Mark Zuckerberg of Meta — met with lawmakers in Washington to discuss the future of AI and potential regulations, The New York Times reported.

The AI Insight Forum was organized by Senate Majority Leader Chuck Schumer of New York and aimed to educate Congress about AI technology and its implications.

According to the Times, Musk — who has called for a moratorium on the development of some AI systems in the past — described the dangers posed by artificial intelligence as an “existential crisis.”

“If someone takes us out as a civilization, all bets are off,” he said, according to a person in the room.

Musk said he had told the Chinese authorities, “If you have exceptionally smart AI, the Communist Party will no longer be in charge of China.”

As he was leaving the meeting, Vaughn asked him whether AI was going to “kill us all.”

“You should think of the future as a series of probabilities as opposed to certainties,” Musk said. “There is some chance, above zero, that AI will kill us all. I think it’s low, but there’s some chance.

“I think we should also consider the fragility of human civilization. And if you study history, I think you realize there’s a rise and fall of every civilization. Every civilization has sort of a lifespan. And so we want ours to last as long as possible.”

Yes, we do. And yet we continue to build the very things that may destroy or, in some way, control us, all in the name of expediency and incremental changes in our quality of life.

We are not just a fragile civilization; we are probably a very stupid one as well.

This article appeared originally on The Western Journal.
