ShanghAI Lectures 2015

Interest Group Board Entries

Bohan Liu
1 year 9 months ago

Afterthoughts on the "can we prevent AIs from being evil" question put forward by UC3M:

I don't think that is exactly the right question. It's quite difficult to draw the border between evil and good on every subject. If terrorists design an AI to spread horror, that's no doubt evil. But what about big companies like Google, Facebook, and Microsoft creating AIs to automatically collect and analyze user data for their business, which is pretty close to the reality today? Would it be an evil thing if there were an AI online banning people from expressing racism, violence, or rumors (something like making up fake news of a serious earthquake)? Does that contradict freedom of speech? And what would an AI programmed to never be evil do if it saw humans doing something "evil"? Would it decide that humans are the ultimate source of evil, and that the best solution is to wipe us out for the greater good?

Let's take some of Turing's insight and avoid endlessly debating what's evil and what's human nature. Just think about this: is it possible to prevent AIs from going beyond human control? I believe this is the most important issue. Debates within the U.S. Air Force considered whether America's next generation of nuclear bombers should be fully autonomous; in the end they decided to keep future bombers under human pilots' control. That can inspire us. Terrorists using AIs is a bad thing, but at least we can still fight them in that case. But what if even the most evil people get overthrown by their own evil AIs or, more seriously, become victims of these AIs (humans ending up controlled by their own weapons)? That would be absolute chaos. So I think it should always be up to us to decide what is an evil thing, never the machines. Thus the question arises: can AIs be made to always obey humans' orders?

This problem is also more practical, I think, as there are scientific ways to tackle it. We can explore the boundaries of control systems, and also methods to detect failures (since bugs might have surprising results).
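To make that concrete, here is a toy sketch (all names here are hypothetical, not any real autopilot or robot API): an autonomous policy proposes actions, a simple envelope check detects out-of-bounds outputs, and anything suspicious is escalated to a human instead of being executed.

```python
# Toy sketch only: hypothetical names, not a real control system.
# An autonomous policy proposes actions; a safety envelope detects
# out-of-bounds outputs; anything outside it needs human approval.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    magnitude: float  # e.g. a steering angle or thrust level

SAFE_LIMIT = 1.0  # the control boundary that we, the humans, define

def propose_action(sensor_reading: float) -> Action:
    """Stand-in for an arbitrarily complex (and possibly buggy) policy."""
    return Action("steer", sensor_reading * 0.8)

def within_envelope(action: Action) -> bool:
    """Failure detection: flag any action outside the allowed envelope."""
    return abs(action.magnitude) <= SAFE_LIMIT

def human_approves(action: Action) -> bool:
    """In a real system this would be an operator console, not input()."""
    return input(f"Approve {action}? [y/N] ").strip().lower() == "y"

def control_step(sensor_reading: float) -> None:
    action = propose_action(sensor_reading)
    if within_envelope(action):
        print(f"Executing {action}")
    elif human_approves(action):
        print(f"Executing {action} (human-approved)")
    else:
        print(f"Blocked {action}; falling back to a safe default")

if __name__ == "__main__":
    control_step(0.5)  # inside the envelope: executes automatically
    control_step(2.0)  # outside the envelope: the human has the final say
```

A real system would be far more complicated, of course, but the principle is the same: the machine acts freely only inside a boundary we define, and the final say stays with a human.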

Do you agree?



Bohan Liu
1 year 9 months ago

Do you think Terminators are a kind of AI? I mean, they can blend into human society without humans being aware (until things blow up), deploy tactics (even collaborative ones), and adjust to their current states (if they lose their legs, they know to crawl with their arms). But all in all, they are just programmed for a single purpose (to protect the Connors, or kill the Connors, or kill any human). And they don't have emotions, at least not initially. If we humans built something like them, could we consider them AIs? And if we created something like Skynet, and Skynet built these machines (as in the movies), how should we define them? AI-made AIs?


Martin F. Stoelen
1 year 9 months ago

Cool topic Bohan, and quite relevant right now. My opinion: Let's rather focus on useful and user-friendly (in every aspect) robots and AI systems that can solve our current and future problems. There are plenty of good problems ;) Anyone else have thoughts on this?


Bohan Liu
1 year 9 months ago

Yeah, I totally agree that we should be working on peaceful robots; I used Terminators as an example because I had just watched the movie and it got me thinking. The core of my topic is whether we can consider programs/robots that can only do one task to be AIs (even though this single task might involve unpredictable and dynamic challenges, say, an advanced autopilot program for airplanes: its only task is flight control, which means it doesn't need to handle problems such as passenger complaints, but it does need to deal with airplane system failures, steer through bad weather, etc.).
Speaking of "single" tasks, I've met problems defining the term. Creating a robot which can write alphabetic letters with a pencil is one thing, creating one which can create and write English poems wtih a pencil is a another.However you can always say both "writing letters" and ""writing poems" are "single tasks", since these purposes are clear enough.
I also wonder if there is a set of basic-particle-like functions from which we can build anything. Tasks such as flight control and poem writing each have their own subset of smaller tasks. Is it possible to find a set of ultimately basic tasks that, used in the right way, can accomplish anything we want (similar to how we can use the Bare Bones language to create all possible programs)? This also seems to fit with the divide-and-conquer method. Thinking about it also raises the long-debated question of whether everything can be solved by computation, but that might be a bit too far from our current lectures.
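Here is a minimal sketch of an interpreter for Bare Bones itself (assuming the Brookshear-style syntax with just clear, incr, decr and "while X not 0 do ... end"). It shows how a tiny fixed set of primitives, composed with a single loop construct, already builds arbitrary computations, e.g. addition:

```python
# Minimal sketch of a Bare Bones interpreter (assuming the Brookshear-style
# syntax: clear/incr/decr and "while X not 0 do ... end"). The point: three
# primitives plus one loop already compose into arbitrary computation.

def run_bare_bones(source):
    """Interpret a Bare Bones program; return the final variable values."""
    stmts = [s.strip().lower() for s in source.split(";") if s.strip()]
    env = {}          # variable name -> non-negative integer
    loop_stack = []   # indices of the enclosing "while" statements
    pc = 0
    while pc < len(stmts):
        words = stmts[pc].split()
        if words[0] == "clear":
            env[words[1]] = 0
        elif words[0] == "incr":
            env[words[1]] = env.get(words[1], 0) + 1
        elif words[0] == "decr":
            env[words[1]] = max(0, env.get(words[1], 0) - 1)
        elif words[0] == "while":              # "while X not 0 do"
            if env.get(words[1], 0) != 0:
                loop_stack.append(pc)          # remember the loop head
            else:                              # skip past the matching "end"
                depth = 1
                while depth:
                    pc += 1
                    head = stmts[pc].split()[0]
                    depth += (head == "while") - (head == "end")
        elif words[0] == "end":
            pc = loop_stack.pop() - 1          # jump back to re-test
        pc += 1
    return env

# Addition (x + y), built from nothing but the primitives:
program = """
clear x; incr x; incr x;
clear y; incr y; incr y; incr y;
while y not 0 do; incr x; decr y; end;
"""
print(run_bare_bones(program))  # -> {'x': 5, 'y': 0}
```

For pure computation, then, such a minimal "particle set" is known to exist (Bare Bones is Turing-complete); whether something analogous exists for physical tasks like flight control is exactly the open question.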
So I'm left with three problems now: can we call a single-task program/machine an AI (where the task is complex and requires some sort of "intelligence" to accomplish); how do we define what a "single" task is; and can we break functions into pieces small enough that anything possible can be created from them?
It would certainly be a good thing this time if we could create fully autonomous drones for disaster surveillance and outer-planet exploration. Remote control suffers from latency and even disruption there, and the environments are too risky for humans, so there is a real demand for these robots. In the coastal provinces of China struck by record-breaking, super-strong typhoons, or in the mountainous provinces hit by serious earthquakes in recent years, satellite views can be blocked by clouds, and it's obviously dangerous to fly human-piloted helicopters right into the storms or through wind-swept valleys to assess and rescue. Losses caused by disasters are heavy, yet modern UAVs aren't smart enough for those critical missions at the moment. I think AI technologies will help.
Meanwhile I'll look further into these three topics and post any new discoveries or problems :)


Bohan Liu
1 year 10 months ago

Is...this the place where we students discuss? I don't see any other students right now :-0


Nathan Labhart
1 year 10 months ago

I guess students will join once the Kōans start, i.e., after Lecture 3 :-)


Verena Hafner
1 year 10 months ago

Can we use this group for interactions during the lectures?


Nathan Labhart
1 year 10 months ago

Yes, that is one option. I just suggested a Drupal chat module to Martin; maybe he can add it to the site… it looks similar to the chat we had some years ago.


Martin F. Stoelen
1 year 10 months ago

We have now installed DrupalChat, so we can use that and/or this group for interactions.

