Community

Welcome to the ShanghAI Community

This is the Community Website for all who are interested in Artificial Intelligence (AI).

Browse the different Interest Groups to read about the most recent AI news. Feel free to share your thoughts with the world and post them on the boards of your favourite Interest Groups. Subscribe to a group in order to post and comment on ideas and to be notified when something happens in that group. If a certain Interest Group is missing from the list, just create a new one.

Browse the Member List to find interesting people, add them as friends or send them messages.

Newest Posts in Interest Groups

Olivier Michel
posted in Webots
1 year 7 months ago

We are organizing a robot programming challenge called "NAO race" based on Webots, which may be interesting for ShanghAI Lectures students. Let me quickly introduce it...

The goal is to program a NAO robot in Python to walk (or run) as fast as possible in a 10-meter race. There is no need to install Webots, as it runs in the cloud from your web browser (Firefox or Chrome):

https://www.cyberbotics.com/nao_race
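
For anyone who hasn't written a Webots controller before, a minimal Python sketch might look something like the one below. This is only an illustration of the control-loop structure, not the official contest template; the joint names and gait values are assumptions, and older Webots versions use getMotor() where newer ones use getDevice().

# Illustrative Webots controller sketch (not the official contest template).
import math
from controller import Robot

robot = Robot()
timestep = int(robot.getBasicTimeStep())

# Joint names assumed from NAO's usual naming; adjust to the model in the contest world.
left_hip = robot.getDevice('LHipPitch')
right_hip = robot.getDevice('RHipPitch')

t = 0.0
while robot.step(timestep) != -1:
    t += timestep / 1000.0
    # A crude sinusoidal "gait" just to show where your walking logic would go.
    amplitude = 0.3   # rad, illustrative value
    frequency = 1.5   # Hz, illustrative value
    left_hip.setPosition(amplitude * math.sin(2 * math.pi * frequency * t))
    right_hip.setPosition(-amplitude * math.sin(2 * math.pi * frequency * t))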

We are partnering on this event with The Construct (a new company offering robot simulations in the cloud). They will award the competitor who achieves the best performance with a round trip to Hawaii!

http://contest.theconstructsim.com

It's a great opportunity to demonstrate a nice robotics achievement.
The contest is open now and will close on December 31st.

Cheers,

-Olivier


Bohan Liu
posted in ShanghAI Lectures 2015
1 year 7 months ago

Afterthoughts on the "can we prevent AIs from being evil" question put forward by UC3M:

I don't think that is exactly the right question. It's quite difficult to draw the border between evil and good on every subject. If terrorists design an AI to spread terror, that's no doubt evil. But what if big companies like Google, Facebook, Microsoft etc. create AIs to automatically collect and analyze user data for their business, which is pretty close to the reality today? Would it be an evil thing if there were an AI online to ban people from expressing racism or violence or spreading rumors (something like making up fake news of a serious earthquake)? Does that conflict with freedom of speech? And what would an AI programmed to never be evil do if it saw humans doing something "evil"? Would it decide that humans are the ultimate source of evil and that the best solution is to wipe us out for the greater good?

Let's take some of Turing's insights and avoid endlessly debating over what's evil and what's human nature. Just think about this: is it possible to prevent AIs from going beyond human control? I believe this is the most important issue. Debates within the U.S. Air Force considered whether America's next generation of nuclear bombers should be fully autonomous; in the end, they decided to keep future bombers under human pilots' control. That can inspire us. Terrorists using AIs is a bad thing, but at least we can still fight them in that case. But what if even the most evil people are overthrown by their own evil AIs or, more seriously, become victims of those AIs (humans ending up controlled by their own weapons)? That would be absolute chaos. So I think it should always be up to us to ultimately decide what is evil, never the machines. Thus the question arises: can AIs always absolutely obey humans' orders?

This problem is also more practical, I think, as there are scientific ways to tackle it. We can explore the boundaries of control systems, as well as methods to detect failures (since bugs might have surprising results).

Do you agree?


Bohan Liu
posted in Evolving Robot Explorers
1 year 7 months ago

My summary:
Tasks of our robots:
1. Drill through the ice layers.
2. Sample and analyze materials (if there indeed are things beneath the ice).
3. Get back to the surface.
4. Transmit results to Earth (or to probes orbiting that astronomical object).

Design constraints:
1. We have to keep them small and light in order to launch them successfully to their destinations.
2. Communication latency with Earth will be too long, so they must be fully autonomous and solve any issues without human help.

Our challenges:
1. How do we dig through hundreds of kilometers of ice (and then return from beneath it)?
2. Where do our robots draw their energy from?

My ideas:
We deploy a swarm of small, basic "block" robots and let them evolve and collaborate over a fairly long period to get the work done. But I'm still thinking about what things (basic rules) we should preprogram them with. Any other ideas?
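
To make this a bit more concrete, here is a rough, purely illustrative sketch of the kind of evolutionary loop I have in mind, assuming each robot's "basic rules" are encoded as a fixed-length parameter vector and assuming some fitness function (drilling depth reached, samples analyzed, data transmitted, ...) that we would still have to define:

# Toy genetic-algorithm sketch for evolving the block robots' rule parameters.
import random

GENOME_LENGTH = 8        # number of behaviour parameters per robot (assumption)
POPULATION_SIZE = 20
GENERATIONS = 50
MUTATION_RATE = 0.1

def evaluate(genome):
    # Placeholder fitness: a real version would score a swarm simulation
    # (drilling, sampling, returning, transmitting).
    return -sum((g - 0.5) ** 2 for g in genome)

def mutate(genome):
    return [g + random.gauss(0, 0.1) if random.random() < MUTATION_RATE else g
            for g in genome]

def crossover(a, b):
    point = random.randrange(1, GENOME_LENGTH)
    return a[:point] + b[point:]

population = [[random.random() for _ in range(GENOME_LENGTH)]
              for _ in range(POPULATION_SIZE)]

for generation in range(GENERATIONS):
    ranked = sorted(population, key=evaluate, reverse=True)
    survivors = ranked[:POPULATION_SIZE // 2]          # keep the better half
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(POPULATION_SIZE - len(survivors))]
    population = survivors + children

best = max(population, key=evaluate)

The interesting part, of course, is what goes into evaluate() and how the evolved parameters map onto the robots' low-level behaviours; that is exactly the "what should we preprogram them with" question.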


Bohan Liu
posted in ShanghAI Lectures 2015
1 year 7 months ago

Do you think Terminators are a kind of AI? I mean, they can blend into human society without humans being aware (until things blow up), deploy tactics (even collaborative ones), and adjust to their current states (if they lose their legs, they know to crawl with their arms). But all in all, they are just programmed for a single purpose (to protect the Connors, or kill the Connors, or kill any human). And they don't have emotions, at least not initially. If we humans built something like them, could we consider them AIs? And if we created something like Skynet and Skynet built these machines (as in the movies), how should we define them? AI-made AIs?


Bohan Liu
posted in ShanghAI Lectures 2015
1 year 8 months ago

Is...this the place where we students discuss? I don't see any other students right now :-0

