
Discussions

This was my first time building a robot, so it was all a new experience for me, but I enjoyed working with my group and completing this project. Throughout the project, I learned how difficult it is to build a robot that functions the way we want. For example, even just keeping our robot inside the arena was difficult at first. Also, even when a program worked fine in the simulation on Open Roberta Lab, it sometimes did not work when we tried it on the robot we built. In class, we learned about various robots and AIs that work in different fields. Those robots were very complicated, and it was hard to believe some of the things robots can do. The most surprising one to me was the AI created by Google and how it was able to make phone calls sounding just like a human. Although it was not from scratch, by building a robot myself I got a brief feeling of what it is like for researchers to build a robot and how hard that is.

I do not think that our robot is very smart, because it can only do simple things such as trying to stay inside the circle of the arena or pushing the other robots when its sensor detects them. For a robot to be considered intelligent, I think it needs to be able to think and move by its own decisions. Usalapin does not have a mind, because it only moves the way we programmed it. I think a mind is something that comes from within and cannot be created. Therefore, after doing this project, I felt that it is very difficult to create a robot that has its own mind.

The most difficult part of this project was making small changes after the fights. We finished building the robot early, so we tried to improve it through fights with other teams' robots. However, since the core part had already been completed, we could not do much unless we changed the structure completely. We liked the rabbit design of our robot, so we tried to keep the initial design as much as possible, and therefore we were unable to make major changes.

In one of the classes, we had a debate about whether AIs will surpass humans in the future, and I was on the negative side. However, doing this project made me change my opinion. In terms of abilities, I think artificial intelligence will surpass humans. In some cases, robots are already better than humans, and I think robots' abilities will keep improving in the future. It is not that I thought building a robot was easy before, but I became more aware of the difficulties of building one. I did not believe that robots could have a mind, and even after doing this project, I still think it is impossible to build a robot that has its own mind, even in the future.

Overall, I enjoyed working on this project, and I learned many things, from building a robot to programming. Throughout the project, it made me think about the things we learned and discussed in class, and I gained a better understanding of them.

-Kaori Niki

Until taking the M&M class, I had thought programming was very difficult, in the sense that it requires many skills. However, I experienced programming with my team members, and it changed my idea of programming. Certainly, programming is difficult, but not because it requires many skills. Even people with no programming experience can easily enjoy it by using tools like Open Roberta Lab. Rather, the source of its difficulty is its very 'integrity'. Robots move exactly as we program them, except in some situations, such as when they break down. That sounds literally perfect, but from another perspective this property becomes a defect: fundamentally, robots have no flexibility, that is, they cannot do anything beyond the contents of our instructions. I was very confused about that. For example, in the Sumo Competition, our robot turned left even though its enemy was on its right. It did so simply because we had instructed it to, but if a human were in its place, he would never do such a foolish thing. I was not aware of this defect until this class. Someone might say it cannot think for itself only because we do not teach it to think. Certainly, an AI learns many things, develops its own abilities, thinks, and acts on its own through programming. Such a person would go on to insist that robots can become truly perfect through perfect programming. However, this argument misses an important point. Even if robots can think on their own through programming, that thinking stays within the range of the program. After all, they just follow our instructions. They cannot exceed the limits of the program. Therefore, paradoxically, robots are perfect, yet imperfect.

 

When I realized this, I began to doubt something that I had always thought was very natural. We humans have many opportunities to choose, and we believe we choose freely. However, do we really choose freely? Thinking about robots, I suspect we may also be following a program. When we choose something, we may be made to consider many factors, examine them, decide which factor to give importance to, and finally choose one thing, all by programming. Even if we select randomly, the selection usually has some underlying factors; we just do not notice them. We are affected by our surroundings unconsciously. In addition, even if we really can select randomly, that is not evidence against my idea, because programs are very good at random selection. Possibly, our behaviors may actually all be determined beforehand by a program.

 

As robots acquire more and more abilities, the difference between robots and humans is becoming smaller and smaller. That means not only that robots are becoming like humans, but also that we come to feel differently about what we have thought humans are. We believe and hope that we have something peculiar to humans, but this belief is now under threat. We have developed robotics, and robots have gained a lot of skills. Robots play many important roles in today's human society. Ahead of that development, we may discover our own program.

-Soichiro Asano

Before this class, I strongly believed that robots were not intelligent, and after creating one with my teammates, my belief did not change at all. In fact, I think it all depends on what you mean by intelligence. Our robot is able to do a lot of things. It can detect, at the front and at the back, variations of light on the ground and react to them. We designed our program to make sure that our robot will never leave the sumo ring by itself. Our robot is also able to seek (by turning on the spot), detect the presence of an opponent with an ultrasonic sensor, and push that opponent out of the ring as hard as it can (without leaving the ring itself). I think it is pretty impressive that we were able to create such a set of behaviors, and I can understand how one could think that our robot is intelligent. Additionally, I have to say that the design of the body itself is pretty intelligent in the context of a sumo fight. The body is long, with a center of gravity close to the ground, making our robot hard to flip over and giving it great pushing strength; the ears protect the poi a bit.
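To give a concrete idea of what that set of behaviors amounts to, here is a rough sketch of the control loop in Python using the ev3dev2 library. Our actual program was built from Open Roberta Lab blocks, so the sensor ports, thresholds, and speeds below are illustrative guesses rather than our real settings.

# Rough Python (ev3dev2) sketch of Usalapin's sumo behavior.
# Ports, thresholds, and speeds are illustrative, not our actual values.
from ev3dev2.motor import MoveTank, OUTPUT_B, OUTPUT_C, SpeedPercent
from ev3dev2.sensor import INPUT_1, INPUT_2, INPUT_4
from ev3dev2.sensor.lego import ColorSensor, UltrasonicSensor

tank = MoveTank(OUTPUT_B, OUTPUT_C)
front_light = ColorSensor(INPUT_1)   # points down, just ahead of the wheels
back_light = ColorSensor(INPUT_2)    # points down, behind the wheels
eyes = UltrasonicSensor(INPUT_4)     # points forward, looking for the opponent

WHITE = 60      # reflected-light reading above this means the border line (guess)
ENEMY_CM = 40   # treat anything closer than this as the opponent (guess)

while True:
    if front_light.reflected_light_intensity > WHITE:
        # Border ahead: back up so we never leave the ring.
        tank.on_for_rotations(SpeedPercent(-50), SpeedPercent(-50), 1)
    elif back_light.reflected_light_intensity > WHITE:
        # Border behind: drive forward, back toward the middle.
        tank.on_for_rotations(SpeedPercent(50), SpeedPercent(50), 1)
    elif eyes.distance_centimeters < ENEMY_CM:
        # Opponent in sight: push at full speed.
        tank.on(SpeedPercent(100), SpeedPercent(100))
    else:
        # Nothing found: spin in place to seek.
        tank.on(SpeedPercent(30), SpeedPercent(-30))

Everything the robot "decides" is already contained in those few branches, which is part of why I hesitate to call it intelligent.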

 

Even if I am proud of Usalapin's design, whether the program or the body structure, I really do not believe we can call Usalapin intelligent in the human sense. First, AIs are an extension of humanity: through them, we reproduce dynamics and processes that we know and understand. If the scientific community cannot establish a consensual definition of what intelligence is, it means we do not really understand what intelligence is. So how would you create something based on elements you do not understand and cannot define? How can you reproduce the dynamics of intelligence if you do not know what intelligence is? Our program is actually very simple, since we were able to create it with little programming knowledge. Wouldn't it be very pretentious to say that we created intelligence?

 

Secondly, to be intelligent, I think you need to be able to gather information about your environment and react to it. Our robot seems to do that, but that is not really the case: Usalapin collects information about its environment and feeds the data into its program, behaving only within the range of what the program allows. In practice, there are a lot of situations that the robot does not handle well. For instance, once the robot detects an enemy, it drives forward at full speed to push the enemy out of the ring, and when the front light sensor detects the white line of the ring, it goes backward so as not to leave the ring. Nevertheless, if the opponent is too light, the robot can go too fast and leave the ring without detecting the white line. Also, our robot has no memory and cannot adapt or modify its program by itself to reach an optimal strategy. But then again, even if our robot were able to do all of that (it is partly just a lack of sensors), that is, modify and improve its own program, adapt its behavior to varying environmental cues, and so on, would that be intelligence, or a mere reproduction of what we think intelligence is? I believe the latter: I think AIs are just a tasteless copy of human intelligence. However, from a robot, non-human perspective, I think Usalapin is quite intelligent; and I think intelligence, whether in robots, humans, or other animals, lies not only in the mind but is directly related to the body, its shape, and its functions. I do believe that we cannot generate intelligence while dissociating the body from the mind.

 

Finally, I think the main challenge in creating AIs and intelligent robots lies first and foremost in the intention we put into creating them. I think it is vain to want to create and evaluate AIs based on a human perspective of what intelligence is. I think robots will never be as intelligent as us, because theirs is another form of intelligence and thus not comparable. We should focus on creating AIs able to develop themselves on their own (self-editing), rather than on creating something comparable to us.

 

-Valentin Ritou

Before taking this class, I thought that it would be easy for robots to have minds and to become smarter than humans. However, my view changed completely through the sumo robot project. I found one big problem for robots.

That is "Robots can't understand purposes."  In the case of our robot Usalapin, he can detect  an enemy by sensors and go toward it to beat it, but he don't understand the purpose beating enemy in sumo wrestling. I realized that robots only works as human designed.If robots work to suit a purpose, it is just because we designed them to so not because they want to suit a purpose.Moreover, sometimes robots, don't work even as human designed. Virtually, our robot went outside of arena in sumo competition despite we designed him as he can identify between outside and inside.

Furthermore, if robots cannot understand purposes, they cannot have free will. In other words, they cannot have minds. What is needed for free will? The answer is a criterion. Having free will means being able to make choices by oneself. Robots do not understand purposes, so they have no criteria for their choices. This is why robots cannot have free will or minds. Of course, AIs are much smarter than our robot, and sometimes AIs seem to have free will. However, that is just because humans designed them to act as if they had free will.

Through the M&M class, I thought deeply about the question "Can robots have minds?", and now I think the answer is "No." Nevertheless, I hope that someday technology will advance and robots will come to have minds.

 

-Arata Miyachika

 

Because I was busy, I could hardly attend our team project meetings, so I was put in charge of line tracking. I could easily program the simulator to track a gentle line, but it was hard to make it track a winding line, even using the same program. After changing just one number, the simulator immediately went off the line; I only got good behavior by changing the numbers little by little. In class, I heard about self-driving cars and thought at the time that they would soon come true, but now I have doubts, because I know the difficulty of line tracking. However much technology develops, I think the human eye will still be needed in driving. I also contributed a little to the robot's design: the smaller wheels were my idea. By the lever principle, the same motor torque pushed through a smaller wheel radius produces a larger force at the ground (at the cost of speed), and in fact, when we changed the robot's wheels to smaller ones, it suddenly became able to win.
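To show how sensitive this is, here is a small proportional line follower written in Python with the ev3dev2 library, similar in spirit to the block program I made in Open Roberta Lab; the port, target, and gain values are only examples, not the numbers I actually used. The gain KP is exactly the kind of "one number" that decides whether the robot hugs a winding line or flies off it.

# Sketch of a proportional line follower (Python, ev3dev2).
# Values are illustrative, not the settings from my simulator program.
from ev3dev2.motor import MoveTank, OUTPUT_B, OUTPUT_C, SpeedPercent
from ev3dev2.sensor import INPUT_1
from ev3dev2.sensor.lego import ColorSensor

tank = MoveTank(OUTPUT_B, OUTPUT_C)
light = ColorSensor(INPUT_1)

TARGET = 50   # reflected light on the edge between the dark line and the light floor
BASE = 30     # base speed in percent; lower is safer on sharp curves
KP = 0.5      # proportional gain: too large and the robot zigzags off the line,
              # too small and it cannot keep up with a winding line

while True:
    error = light.reflected_light_intensity - TARGET
    turn = KP * error
    # Steer back toward the line edge by speeding up one wheel and slowing the other.
    tank.on(SpeedPercent(BASE + turn), SpeedPercent(BASE - turn))

Raising KP makes the robot react more sharply to curves, but raising it too much makes it overshoot and lose the line, which is why I had to adjust the numbers little by little.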

 

I think our robot is not smart, because it often loses sight of the enemy right away. If it turns a little too far, it cannot find the opponent again and just keeps turning. Unlike a human being, it cannot pick up on other cues, so it cannot be smarter than that. But I hope robots will become smarter and help us a lot.

-Fukutaro Tazawa
