Afterthoughts on the "can we prevent AIs from being evil" question put forward by UC3M:
I don't think that is quite the right question. It's difficult to draw the border between good and evil on every subject. If terrorists design an AI to spread terror, that is no doubt evil. But what if big companies like Google, Facebook, or Microsoft create AIs to automatically collect and analyze user data for their business, which is pretty close to today's reality? Would it be evil for an AI to be deployed online to ban people from expressing racism, violence, or rumors (such as making up fake news about a serious earthquake)? Does that conflict with freedom of speech? And what would an AI programmed never to be evil do if it saw humans doing something "evil"? Might it decide that humans are the ultimate source of evil, and that the best solution is to wipe us out for the greater good?
Let's take a cue from Turing and avoid endlessly debating what counts as evil and what is human nature. Just think about this: is it possible to prevent AIs from going beyond human control? I believe this is the most important issue. Debates within the U.S. Air Force considered whether America's next generation of nuclear bombers should be fully autonomous; in the end, the decision was to keep future bombers under human pilots' control. That can inspire us. Terrorists using AIs is a bad thing, but at least we can still fight them in that case. But what if even the most evil people are overthrown by their own evil AIs, or worse, become victims of those AIs (humans ending up controlled by their own weapons)? That would be absolute chaos. So I think it should always be up to us to decide what counts as evil, never the machines. Thus the question becomes: can AIs be made to always obey humans' orders?
This problem is also more practical, I think, because there are scientific ways to tackle it: we can explore the limits of control systems, and develop methods to detect failures (since bugs can have surprising results).
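To make the "humans decide, never the machines" idea a bit more concrete, here is a minimal sketch of a human-in-the-loop control boundary: an agent's proposed action is only executed if it is on a known-safe list or a human explicitly approves it. All names here (`ALLOWED_ACTIONS`, `run_agent_action`, etc.) are hypothetical illustrations, not from any real framework.

```python
# Hypothetical sketch: keep the final decision with a human.
# Any action outside a known-safe set must pass an explicit human gate.

ALLOWED_ACTIONS = {"analyze_data", "send_report"}  # illustrative safe set


def requires_human_approval(action: str) -> bool:
    """Treat anything outside the known-safe set as needing human review."""
    return action not in ALLOWED_ACTIONS


def run_agent_action(action: str, human_approves) -> str:
    """Execute an action only if it is known-safe or a human approves it.

    `human_approves` is a callback standing in for a real review step.
    """
    if requires_human_approval(action):
        if not human_approves(action):
            return f"blocked: {action}"
    return f"executed: {action}"
```

For example, `run_agent_action("analyze_data", lambda a: False)` runs without review, while an unrecognized action like `"launch_strike"` is blocked unless the human callback says yes. The real difficulty, of course, is that a sufficiently capable system might find actions that the safe list never anticipated, which is exactly why detecting such failures matters.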
Do you agree?