Ian Schmidt

Expert
Staff member
Moderator
Messages
1,599
Reaction score
2,655
I'm not too surprised to see this kind of wide-scale testing of the system given Toyota wants to make a splash at the Tokyo Olympics.

And with social media filling up with stories and videos of Tesla's auto-summon crashing into garages, other cars, and pedestrians, it's a good time for other automakers to show their self-driving systems working properly.
 

internalaudit

Follower
Messages
299
Reaction score
343

Autonomous vehicles pose a whole bunch of R&D challenges. With so many aspects to consider -- power consumption, safety, user interface and data management, to name just a few -- creating a common computing platform for their use is a big ask of just one company. That's why a group of automotive and tech businesses have joined forces to create the Autonomous Vehicle Computing Consortium (AVCC), in a bid to create a platform that will promote the scalable deployment of automated and autonomous vehicles.

The consortium includes ARM, Bosch, Continental, DENSO, General Motors, NVIDIA, NXP Semiconductors and Toyota (whose P4 Automated Driving Test Vehicle is pictured above), who will collaborate on overcoming some of the most significant challenges posed by autonomous vehicles -- the group's first step will be developing a set of recommendations for a system architecture for the computing platform. According to Alex Harrod of Arm, "The group brings together a unique combination of expertise and a shared goal," and will be welcoming input from other interested parties and members of the automotive ecosystem.
 

CRSKTN

Follower
Messages
159
Reaction score
212

As part of its big robot push for the upcoming 2020 Tokyo Olympic and Paralympic Games, Toyota says it will have 20 of its e-Palette electric vehicles on-site to transport athletes.
Each of the vehicles will travel through the athletes' village at a leisurely 12 miles per hour along a designated loop. As an SAE Level 4-capable autonomous vehicle, the e-Palette will be able to navigate the area all on its own. However, a safety attendant will be onboard each vehicle to ensure nothing goes wrong. Those capabilities put the e-Palette in about the same ballpark as Waymo's current fleet of autonomous vehicles.
 

internalaudit

Follower
Messages
299
Reaction score
343

When tourists arrive in Tokyo for the 2020 Olympic Games next summer, they'll potentially have another exciting experience beyond watching gold-medal athletics. That's because Toyota will be giving visitors free rides in self-driving cars on public roads in the Odaiba district of Tokyo.

Odaiba is a shopping and entertainment district on a man-made island in Tokyo Bay that will become even busier than usual during the Olympics. Events scheduled to take place there include marathon swimming and triathlon in Odaiba Marine Park, beach volleyball in Shiokaze Park, and basketball in Aomi Urban Sports Park. The Toyota Research Institute (TRI), which is leading the self-driving-vehicle project, isn't saying you'll be able to flag down one of the self-driving cars to go to any of these events, just that the car will be on the island at the same time.

The cars will be Platform 4 (P4) automated driving test vehicles that are modified versions of the fifth-generation Lexus LS sedan.

However, Toyota isn't announcing all that many details about the program just yet—details such as exactly how many cars will be used, how long each ride will be, or how people can register to get selected to take a trip. TRI has said that the program will run from July to September 2020 and that it will operate in a "mobility as a service" driving environment. More details will come out as the event gets closer, TRI communication manager Rick Bourgoise told Car and Driver.

The vehicles used in Odaiba will be operating at Level 4 autonomous driving under the SAE's definition. That means humans won't ever need to take control of TRI's Lexus within its defined operating conditions. Even so, a human safety driver will be in the car at all times, as required by Japanese law, TRI says.

We asked whether riders will be able to treat the special Lexus LS vehicles as shuttles taking them where they want to go. The answer, it turns out, is no. John Hanson of TRI told C/D that there will be "predetermined routes showcasing the vehicle's capabilities in a broad range of driving environments, challenges, and conditions." He also points out that a Level 4 autonomous vehicle—one that can operate without human input—is required to stay within a set area approved for its use, also known as an Operational Design Domain (ODD), which Toyota is creating specifically for use during the Olympic Games.

Even without the complete details, TRI engineers are confident that the self-driving cars will be able to navigate the Odaiba streets, Bourgoise said. "Our team is capable of accomplishing it, which serves as our next major development milestone," he said.

With nine months to go before the public demonstration starts, TRI will keep working on the AI technology in the U.S. and in Toyota's home country. TRI works with the Advanced R&D Division of Toyota Motor Corporation and Toyota Research Institute-Advanced Development (TRI-AD), which is based in Tokyo and will continue to test "on our closed course in Michigan, public roads in Michigan and California, and on public roads in Japan," Bourgoise said. That includes replicating some of the challenges the cars will find in the "challenging infrastructure" of Odaiba, both on location and on Michigan and California roads.
 

krew

Expert
Staff member
Administrator
Messages
3,340
Reaction score
5,272

Lexus LS sedans equipped with autonomous driving technology will be available for public demonstration rides in Tokyo next summer.
From July to September 2020, the Platform 4 automated driving test vehicle will showcase Toyota’s “Chauffeur” SAE Level-4 capabilities in the Odaiba district of Tokyo. Registration will be open to the public, with individuals selected to participate. In accordance with Japanese law, a Safety Driver will be present during the experience.
Introduced at CES 2019, the P4 test vehicle is based on the fifth-generation Lexus LS sedan. It is being used by the Toyota Research Institute as the development testbed of both “Toyota Guardian” active safety and “Chauffeur” automated driving...
Continue reading...
 

internalaudit

Follower
Messages
299
Reaction score
343

MIT and Toyota researchers have designed a new model to help autonomous vehicles determine when it’s safe to merge into traffic at intersections with obstructed views. Their paper is published in IEEE Robotics and Automation Letters.

In 2016, roughly 23% of fatal and 32% of nonfatal US traffic accidents occurred at intersections, according to a 2018 Department of Transportation study. Automated systems that help driverless cars and human drivers steer through intersections can require direct visibility of the objects they must avoid. When their line of sight is blocked by nearby buildings or other obstructions, these systems can fail.

The researchers developed a model that instead uses its own uncertainty to estimate the risk of potential collisions or other traffic disruptions at such intersections. It weighs several critical factors, including all nearby visual obstructions, sensor noise and errors, the speed of other cars, and even the attentiveness of other drivers. Based on the measured risk, the system may advise the car to stop, pull into traffic, or nudge forward to gather more data.

The researchers tested the system in more than 100 trials of remote-controlled cars turning left at a busy, obstructed intersection in a mock city, with other cars constantly driving through the cross street. Experiments involved fully autonomous cars and cars driven by humans but assisted by the system.

In all cases, the system successfully helped the cars avoid collisions 70 to 100 percent of the time, depending on various factors. Other similar models implemented in the same remote-control cars sometimes couldn't complete a single trial run without a collision.

The model is specifically designed for road junctions in which there is no stoplight and a car must yield before maneuvering into traffic at the cross street, such as taking a left turn through multiple lanes or roundabouts. In their work, the researchers split a road into small segments. This helps the model determine if any given segment is occupied to estimate a conditional risk of collision.

Autonomous cars are equipped with sensors that measure the speed of other cars on the road. When a sensor clocks a passing car traveling into a visible segment, the model uses that speed to predict the car’s progression through all other segments. A probabilistic “Bayesian network” also considers uncertainties—such as noisy sensors or unpredictable speed changes—to determine the likelihood that each segment is occupied by a passing car.
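To make the segment-occupancy idea concrete, here's a minimal sketch in Python. Everything in it — the Gaussian speed model, the function name, and the numbers — is my illustrative assumption, not code or parameters from the paper: given one noisy speed measurement of a car entering a visible segment, it estimates the probability that the car reaches each downstream segment within a planning horizon.

```python
import math

def occupancy_probs(measured_speed, speed_sigma, seg_length, n_segments, horizon):
    """Probability that a car first observed in segment 0, moving at a
    noisily measured speed, reaches each of the next `n_segments` road
    segments within `horizon` seconds.

    Assumes the true speed is Gaussian around the measurement with
    standard deviation `speed_sigma` (an illustrative noise model).
    """
    probs = []
    for k in range(1, n_segments + 1):
        # Minimum speed needed to cover k segments within the horizon.
        required = k * seg_length / horizon
        # P(true speed >= required) under N(measured_speed, speed_sigma^2),
        # via the complementary error function.
        z = (required - measured_speed) / speed_sigma
        probs.append(0.5 * math.erfc(z / math.sqrt(2)))
    return probs
```

Nearby segments get occupancy probabilities close to 1 while distant ones taper off, which is the shape a planner needs to reason about when a gap is really a gap.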

Because of nearby occlusions, however, this single measurement may not suffice. Basically, if a sensor can’t ever see a designated road segment, then the model assigns it a high likelihood of being occluded. From where the car is positioned, there’s increased risk of collision if the car just pulls out fast into traffic. This encourages the car to nudge forward to get a better view of all occluded segments. As the car does so, the model lowers its uncertainty and, in turn, risk.

But even if the model does everything correctly, there’s still human error, so the model also estimates the awareness of other drivers.

That depends on computing the probability that a driver saw or didn’t see the autonomous car pulling into the intersection. To do so, the model looks at the number of segments a traveling car has passed through before the intersection. The more segments it had occupied before reaching the intersection, the higher the likelihood it has spotted the autonomous car and the lower the risk of collision.
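The awareness estimate above can be sketched with a simple independence assumption — the functional form and the per-segment constant here are my guesses for illustration, not the model the researchers used: each visible segment the approaching car traverses is one more independent chance for its driver to notice the ego vehicle.

```python
def awareness_prob(segments_traversed, p_notice_per_segment=0.3):
    """Probability the other driver has noticed the autonomous car,
    modeled as independent per-segment chances to notice it while
    approaching the intersection. The per-segment probability is an
    illustrative made-up constant, not a value from the paper."""
    p_missed_every_time = (1.0 - p_notice_per_segment) ** segments_traversed
    return 1.0 - p_missed_every_time
```

This captures the qualitative claim in the article: the more segments a car has already occupied on approach, the higher the chance its driver has spotted the autonomous vehicle, and the lower the assessed risk.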

The model sums all risk estimates from traffic speed, occlusions, noisy sensors, and driver awareness. It also considers how long it will take the autonomous car to steer a preplanned path through the intersection, as well as all safe stopping spots for crossing traffic. This produces a total risk estimate.

That risk estimate gets updated continuously for wherever the car is located at the intersection. In the presence of multiple occlusions, for instance, it’ll nudge forward, little by little, to reduce uncertainty. When the risk estimate is low enough, the model tells the car to drive through the intersection without stopping. Lingering in the middle of the intersection for too long, the researchers found, also increases risk of a collision.
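Putting the pieces together, the stop/nudge/go behavior described above could look something like this. The way the risk terms are combined and both thresholds are illustrative assumptions on my part; the paper's actual risk aggregation is more involved.

```python
def decide(occlusion_risk, traffic_risk, sensor_noise_risk,
           inattentive_driver_risk, go_threshold=0.05, nudge_threshold=0.3):
    """Combine per-factor risk terms into one estimate and pick an action.

    Each input is treated as an independent probability of a conflict;
    the combination rule and thresholds are illustrative, not the
    paper's. Returns "go", "nudge", or "stop".
    """
    p_safe = 1.0
    for r in (occlusion_risk, traffic_risk,
              sensor_noise_risk, inattentive_driver_risk):
        p_safe *= (1.0 - r)
    total_risk = 1.0 - p_safe

    if total_risk <= go_threshold:
        return "go"      # commit and clear the intersection without lingering
    if total_risk <= nudge_threshold:
        return "nudge"   # creep forward to shrink occlusion uncertainty
    return "stop"
```

Re-running this decision as the car creeps forward mirrors the behavior in the article: each nudge reduces occlusion uncertainty, the total risk drops, and once it falls below the threshold the car commits and drives through without stopping mid-intersection.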

Running the model on remote-control cars in real-time indicates that it’s efficient and fast enough to deploy into full-scale autonomous test cars in the near future, the researchers say. (Many other models are too computationally heavy to run on those cars.) The model still needs far more rigorous testing before being used for real-world implementation in production vehicles.

The model would serve as a supplemental risk metric that an autonomous vehicle system can use to better reason about driving through intersections safely. The model could also potentially be implemented in certain “advanced driver-assistive systems” (ADAS), where humans maintain shared control of the vehicle.

Next, the researchers aim to include other challenging risk factors in the model, such as the presence of pedestrians in and around the road junction.
 