
Police pull over a Google self-driving car for driving too cautiously

When we first heard that one of Google’s self-driving cars had been pulled over by cops, we thought one of its little pod-like vehicles might’ve been caught hammering along the highway during a sneaky one-off test of its ability to handle high speeds.

But no. It was actually stopped for going too slow. That’s right, a Mountain View traffic cop this week spotted the cute little pod puttering along the street at a speed presumably not much faster than a granny walking a dog, and decided to have a word.

Facebook user Zandr Milewski, who watched the incident unfold (presumably at a snail’s pace), grabbed a shot of the officer beside what looks like a cartoon cop car but is actually one of Google’s self-driving motors (incidentally, that dome on top contains some of the car’s all-important sensors).

After Milewski posted the amusing picture on the social networking site, the Mountain View Police Department confirmed the incident, explaining that one of its traffic cops had “noticed traffic backing up behind a slow moving car,” and as a consequence decided to pull it over.

“As the officer approached [it] he realized it was a Google Autonomous Vehicle,” the statement said, adding, “The officer stopped the car and made contact with the operators to learn more about how the car was choosing speeds along certain roadways and to educate the operators about impeding traffic.”

Meanwhile, Google decided to milk the incident for all it was worth, jumping online to note that “after 1.2 million miles of autonomous driving (that’s the human equivalent of 90 years of driving experience), we’re proud to say we’ve never been ticketed.” That record suggests the Googler cooped up in the car managed to convince the cop that moving at a crawl was sensible under the circumstances, even if it did rile the drivers stuck behind it.

While admitting that it must be pretty rare to get pulled over for driving too slowly, Google explained that for safety reasons it’d decided to cap the speed of its prototype car at 25 mph.

“We want them to feel friendly and approachable, rather than zooming scarily through neighborhood streets,” the message said, adding, “Like this officer, people sometimes flag us down when they want to know more about our project.”

Google is currently testing its self-driving technology using 21 pod-like prototype cars on the roads around its Mountain View headquarters, with a further four tootling about the streets of Austin, Texas. It also has 23 Lexus SUVs on the road equipped with the same self-driving hardware.

With the cars traveling so slowly, you might expect a few rear-endings involving Google’s self-driving vehicles, but data released by the company shows that over the last two months its fleet has avoided any such incident, or any kind of accident for that matter.

Trevor Mogg
Contributing Editor
Not so many moons ago, Trevor moved from one tea-loving island nation that drives on the left (Britain) to another (Japan)…