Thursday 26 January 2012

ARTIFICIAL INTELLIGENCE IS SMARTER THAN YOU THINK

 
This is one example of the business intelligence the world is being introduced to. This helpful trash-eating robot prototype, the DustCart, rolled through the streets of Peccioli, Italy this summer, chomping trash and recyclables. Just like a real-life WALL-E, the bot gathers rubbish and sorts it into recyclable, organic (hopefully to be composted), and general waste.
This fancy little guy isn't being put on the streets just for show. Trash robots like this are being seriously considered as a practical alternative to huge, inefficient garbage trucks that often can't fit down narrow streets. Deployed systematically, they could keep the streets clean at all times, save energy, and cut pollution, while hopefully helping to cut down on littering. The bots could be called to action with a citizen's cell phone, going door-to-door and identifying "customers" with a unique PIN. The DustCart is still a while from actually going to work on crowded streets, but urban waste management solutions like this one are a huge step forward for cities all around the world.
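To make the door-to-door idea concrete, here is a minimal sketch of how a phone-triggered, PIN-based pickup request might be queued. The DustCart's real dispatch software is not public, so every class and field name below is hypothetical.

```python
# Hypothetical sketch of a DustCart-style pickup dispatcher.
# The real DustCart software is not public; all names here are invented.

from dataclasses import dataclass, field

@dataclass
class Customer:
    name: str
    address: str
    pin: str  # unique PIN that identifies the customer at the door

@dataclass
class Dispatcher:
    customers: dict = field(default_factory=dict)  # pin -> Customer
    queue: list = field(default_factory=list)      # pending pickup addresses

    def register(self, customer: Customer) -> None:
        self.customers[customer.pin] = customer

    def request_pickup(self, pin: str) -> bool:
        """Called when a citizen phones in a pickup; queues their address."""
        customer = self.customers.get(pin)
        if customer is None:
            return False  # unknown PIN: reject the request
        self.queue.append(customer.address)
        return True

dispatcher = Dispatcher()
dispatcher.register(Customer("Rossi", "Via Roma 12, Peccioli", pin="4821"))
print(dispatcher.request_pickup("4821"))  # True: pickup queued
print(dispatcher.queue)                   # ['Via Roma 12, Peccioli']
```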
With robots like this eating up the waste, the streets could stay clean at all times. This kind of business intelligence is an environmental robot, one that can help keep our earth free of garbage and waste.





Google Cars Drive Themselves, in Traffic

The car (Toyota Prius) is a project of Google, which has been working in secret but in plain view on vehicles that can drive themselves, using artificial-intelligence software that can sense anything near the car and mimic the decisions made by a human driver.
With someone behind the wheel to take control if something goes awry and a technician in the passenger seat to monitor the navigation system, seven test cars have driven 1,000 miles without human intervention and more than 140,000 miles with only occasional human control. One even drove itself down Lombard Street in San Francisco, one of the steepest and curviest streets in the nation. The only accident, engineers said, was when one Google car was rear-ended while stopped at a traffic light.
Autonomous cars are years from mass production, but technologists who have long dreamed of them believe that they can transform society as profoundly as the Internet has.
Robot drivers react faster than humans, have 360-degree perception and do not get distracted, sleepy or intoxicated, the engineers argue. They speak in terms of lives saved and injuries avoided — more than 37,000 people died in car accidents in the United States in 2008. The engineers say the technology could double the capacity of roads by allowing cars to drive more safely while closer together. Because the robot cars would eventually be less likely to crash, they could be built lighter, reducing fuel consumption. But of course, to be truly safer, the cars must be far more reliable than, say, today’s personal computers, which crash on occasion and are frequently infected.
The Google research program using artificial intelligence to revolutionize the automobile is proof that the company’s ambitions reach beyond the search engine business. The program is also a departure from the mainstream of innovation in Silicon Valley, which has veered toward social networks and Hollywood-style digital media.
During a half-hour drive beginning on Google’s campus 35 miles south of San Francisco last Wednesday, a Prius equipped with a variety of sensors and following a route programmed into the GPS navigation system nimbly accelerated in the entrance lane and merged into fast-moving traffic on Highway 101, the freeway through Silicon Valley.
It drove at the speed limit, which it knew because the limit for every road is included in its database, and left the freeway several exits later. The device atop the car produced a detailed map of the environment.
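In effect, the car pairs its GPS position with a per-road table of posted limits. A minimal sketch of that kind of lookup follows; the road names, limits, and fallback value are made up, since the format of Google's road database is not public.

```python
# Toy speed-limit lookup: map a road identifier to its posted limit (mph).
# Google's actual road database format is not public; this is illustrative.

SPEED_LIMITS_MPH = {
    "US-101": 65,
    "Lombard St": 25,
    "Castro St": 30,
}

def target_speed(road: str, default: int = 25) -> int:
    """Return the speed the car should hold on the given road."""
    return SPEED_LIMITS_MPH.get(road, default)  # cautious fallback if unmapped

print(target_speed("US-101"))      # 65
print(target_speed("Unknown Rd"))  # 25 (default when the road isn't in the table)
```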

The car then drove in city traffic through Mountain View, stopping for lights and stop signs, as well as making announcements like “approaching a crosswalk” (to warn the human at the wheel) or “turn ahead” in a pleasant female voice. This same pleasant voice would, engineers said, alert the driver if a master control system detected anything amiss with the various sensors.
The car can be programmed for different driving personalities — from cautious, in which it is more likely to yield to another car, to aggressive, where it is more likely to go first.
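One plausible way to encode such a personality is as a single yield threshold; the sketch below is a guess at the idea, not Google's actual implementation, and all names and numbers are invented.

```python
# Sketch of a "driving personality" as a yield threshold.
# Illustrative only; Google has not published how its cars encode this.

from enum import Enum

class Personality(Enum):
    CAUTIOUS = 0.8    # yields unless it has a large time advantage
    NORMAL = 0.5
    AGGRESSIVE = 0.2  # takes the gap unless it is clearly too small

def should_yield(time_advantage_s: float, personality: Personality) -> bool:
    """Yield unless our estimated lead (in seconds) at the merge point
    exceeds the personality's threshold."""
    return time_advantage_s < personality.value

print(should_yield(0.6, Personality.CAUTIOUS))    # True: the cautious car yields
print(should_yield(0.6, Personality.AGGRESSIVE))  # False: the aggressive car goes first
```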
Christopher Urmson, a Carnegie Mellon University robotics scientist, was behind the wheel but not using it. To gain control of the car he has to do one of three things: hit a red button near his right hand, touch the brake or turn the steering wheel. He did so twice, once when a bicyclist ran a red light and again when a car in front stopped and began to back into a parking space. But the car seemed likely to have prevented an accident itself.
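That three-way handoff maps naturally onto a check run every control cycle: any one of the three inputs returns control to the human. The signal names and thresholds below are invented for illustration.

```python
# Sketch of the driver-override check described above. Any of the three
# inputs disengages the autopilot; names and thresholds are hypothetical.

def human_override(red_button_pressed: bool,
                   brake_pedal_force: float,
                   steering_torque: float,
                   brake_threshold: float = 1.0,
                   steering_threshold: float = 0.5) -> bool:
    """Return True if the human driver should regain control this cycle."""
    return (red_button_pressed
            or brake_pedal_force > brake_threshold          # driver touched the brake
            or abs(steering_torque) > steering_threshold)   # driver turned the wheel

# A cyclist runs a red light and the safety driver grabs the wheel:
print(human_override(False, 0.0, 2.3))  # True: steering input hands back control
```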
When he returned to automated “cruise” mode, the car gave a little “whir” meant to evoke going into warp drive on “Star Trek,” and Dr. Urmson was able to rest his hands by his sides or gesticulate when talking to a passenger in the back seat. He said the cars did attract attention, but people seem to think they are just the next generation of the Street View cars that Google uses to take photographs and collect data for its maps.
The project is the brainchild of Sebastian Thrun, the 43-year-old director of the Stanford Artificial Intelligence Laboratory, a Google engineer and the co-inventor of the Street View mapping service.
In 2005, he led a team of Stanford students and faculty members in designing the Stanley robot car, winning the second Grand Challenge of the Defense Advanced Research Projects Agency, a $2 million Pentagon prize for driving autonomously over 132 miles in the desert.
Besides the team of 15 engineers working on the current project, Google hired more than a dozen people, each with a spotless driving record, to sit in the driver's seat, paying $15 an hour or more. Google is using six Priuses and an Audi TT in the project. One way Google might be able to profit is to provide information and navigation services for makers of autonomous vehicles. Or, it might sell or give away the navigation technology itself, much as it offers its Android smart phone system to cellphone companies.


But the advent of autonomous vehicles poses thorny legal issues, the Google researchers acknowledged. Under current law, a human must be in control of a car at all times, but what does that mean if the human is not really paying attention as the car crosses through, say, a school zone, figuring that the robot is driving more safely than he would?
And in the event of an accident, who would be liable — the person behind the wheel or the maker of the software?
“The technology is ahead of the law in many areas,” said Bernard Lu, senior staff counsel for the California Department of Motor Vehicles. “If you look at the vehicle code, there are dozens of laws pertaining to the driver of a vehicle, and they all presume to have a human being operating the vehicle.”
The Google researchers said they had carefully examined California’s motor vehicle regulations and determined that because a human driver can override any error, the experimental cars are legal. Mr. Lu agreed.


A "cybernetic human" HRP-4C

A robot designed to look like an average Japanese woman looks surprised, opening its mouth and eyes in reaction during a demonstration in Tsukuba, near Tokyo, Monday, March 16, 2009.
A new walking, talking robot from Japan has a female face that can smile and has been trimmed down to 95 pounds to make its debut at a fashion show. But it still hasn't cleared safety standards required to share the catwalk with human models.


Developers at the National Institute of Advanced Industrial Science and Technology, a government-backed organization, said their "cybernetic human," shown Monday, wasn't ready to help with daily chores or work side by side with people — as many hope robots will be able to do in the future. 

"Technologically, it hasn't reached that level," said Hirohisa Hirukawa, one of the robot's developers. "Even as a fashion model, people in the industry told us she was short and had a rather ordinary figure."
For now, the just over five-foot-two black-haired robot code-named HRP-4C — whose predecessor had weighed 128 pounds — will mainly serve to draw and entertain crowds.
Developers said the robot may be used in amusement parks or to perform simulations of human movement, as an exercise instructor, for instance.
HRP-4C was designed to look like an average Japanese woman, although its silver-and-black body recalls a space suit. It will appear in a Tokyo fashion show — without any clothes — in a special section just for the robot next week.
The robotic framework for the HRP-4C, without the face and other coverings, will go on sale for about 20 million yen ($200,000 US) each, and its programming technology will be made public so other people can come up with fun moves for the robot, the scientists said.
Japan boasts one of the leading robotics industries in the world, and the government is pushing to develop the industry as a road to growth. Automaker Honda Motor Co. has developed Asimo, which can walk and talk, although it doesn't pretend to look human.
Other robots, like the ones from Hiroshi Kobayashi at the Tokyo University of Science and Hiroshi Ishiguro at Osaka University, have more human-like faces and have been tested as receptionists.


A "Child-robot with Biomimetic Body" or CB2

A bald, child-like creature dangles its legs from a chair as its shoulders rise and fall with rhythmic breathing and its black eyes follow movements across the room.
It's not human -- but it is paying attention.

Below the soft silicon skin of one of Japan's most sophisticated robots, processors record and evaluate information. The 130-cm (four-foot, three-inch) humanoid is designed to learn just like a human infant.

"Babies and infants have very, very limited programmes. But they have room to learn more," said Osaka University professor Minoru Asada, as his team's 33 kilogram (73 pound) invention kept its eyes glued to him.

The team is trying to teach the pint-sized android to think like a baby who evaluates its mother's countless facial expressions and "clusters" them into basic categories, such as happiness and sadness.

Asada's project brings together robotics engineers, brain specialists, psychologists and other experts, and is supported by the state-funded Japan Science and Technology Agency.
With 197 film-like pressure sensors under its light grey rubbery skin, CB2 can also recognise human touch, such as stroking of its head.

The robot can record emotional expressions using eye-cameras, then memorise and match them with physical sensations, and cluster them on its circuit boards, said Asada.
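The "clustering" Asada describes is, in spirit, unsupervised grouping of expression features. Here is a minimal k-means sketch on fabricated two-dimensional "expression" vectors; CB2's actual algorithms are far richer and have not been published.

```python
# Minimal k-means sketch: group facial-expression feature vectors into
# basic categories, in the spirit of what Asada describes. The feature
# vectors are fabricated; CB2's real processing is far more complex.

import random

def kmeans(points, k, iterations=20):
    centroids = random.sample(points, k)
    for _ in range(iterations):
        # Assign each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[c])))
            clusters[nearest].append(p)
        # Move each centroid to the mean of its cluster.
        for i, cluster in enumerate(clusters):
            if cluster:
                centroids[i] = tuple(sum(dim) / len(cluster)
                                     for dim in zip(*cluster))
    return centroids, clusters

# Toy features: (mouth curvature, eyebrow height) for observed expressions.
expressions = [(0.9, 0.8), (0.85, 0.75), (0.1, 0.2), (0.15, 0.1), (0.8, 0.9)]
centroids, clusters = kmeans(expressions, k=2)
print(centroids)  # two rough groups, e.g. a "happy-like" and a "sad-like" cluster
```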



Artificial intelligence learning capabilities were recently covered in a TV show I watched about technological advancements, which posed the question of what the future might hold for us. ASIMO is perhaps the most famous and advanced of the robots developed in Japan; it is primarily known for being one of the first and best robots to actually walk on two legs like a human, and even climb stairs. A creation like this could help our company, especially in the packaging process. It could also help arrange the accessories in our store, cutting the cost of hiring new employees to manage the stock.

