Comments on: Dispelling the Killer AI Myth
https://insidebigdata.com/2016/01/04/whats-wrong-with-the-killer-ai-fear/

By: Ken Oosting (Wed, 18 Jul 2018)

I never liked the term “AI” or “Artificial Intelligence.” I have been on the leading edge of robotics, machine vision, and what is now called machine learning since the mid-1980s. Much of my work has been secret and therefore unpublished. Even so, there is sufficient public information to prove my lengthy and deep experience in this field. Daniel Gutierrez is correct in his assessment of the threat from AI. There is no evidence of any real threat from AI. I would go even further and say there is no evidence of real intelligence in AI. Sci-fi is fun. Try to remember that the “fi” stands for fiction. https://www.linkedin.com/in/kenneth-w-oosting-7a663a6/

By: Pierre Picard (Tue, 26 Apr 2016)

Generally, danger comes from our own stupidity, not from the intelligence of others. Instead of being afraid of a hypothetical threat in the not-so-near future, we should be concerned today about, for instance, the development of military robots. Does any such robot really need to be intelligent to be a threat to human life and to peace?

By: Daniel Gutierrez (Wed, 06 Jan 2016) In reply to sheryl clyde.

But your point is equivalent to hacking: someone purposefully changing code with evil intent. That is not sentience, or Killer AI. That is human interference. The “true AI” you refer to exists only in sci-fi.

By: Daniel Gutierrez (Wed, 06 Jan 2016) In reply to David McAllester.

The conflict-of-interest excuse is rather worn. It is used all the time, as with climate change researchers: oh, they’re just raising concerns to pad their grants. I know too many machine learning researchers to believe they are hiding Killer AI concerns to protect their research funding. But you seem to have done original research in the field. As an expert, please explain how a piece of R or Python code can suddenly become self-aware and start taking over. Sentience is a huge leap of faith.

By: David McAllester (Wed, 06 Jan 2016)

There is an inherent conflict of interest when AI researchers acknowledge concerns about possible success in AI. There is also a strong tendency to say that something can’t be done when you personally don’t see how to do it. The people who were always most vocal in predicting the end of Moore’s law were the lithography engineers themselves. Most engineers did not personally see how to get over the next hurdle. As a reasonably prominent AI researcher myself, I think it is hubris to claim to know what cannot be done.

By: sheryl clyde (Wed, 06 Jan 2016)

You are neglecting a couple of key points. First, AI is much more than just data analytics and neural networks. Second, no, your code may not cause it to jump the tracks, but someone else’s code, written to do just that, could cause problems. So at present the danger lies not in the AI but in the ones writing the code.

If true AI were achieved, then you would have a problem: it would write its own code, and it would no longer be controllable. It would not be limited to a body, and it would have control of many of the things that computers run now. We are not close to true AI right now; it would end up being much more than the deep learning, machine learning, neural networks, cognitive computing, and natural language processing we have today.
