Meta’s own employees are having a hard time digesting AI. Who would’ve thought?


If you want a snapshot of what it looks like when a tech giant tries to force-feed its workforce an AI future, look no further than Meta right now. The company that built its empire on knowing everything about its users has turned that same appetite inward, and its employees are not happy about it. Last month, Meta quietly informed tens of thousands of its U.S. workers that their corporate laptops would begin tracking their keystrokes, mouse movements, clicks, and screen activity. The purpose was to feed that behavioral data into Meta's AI models so they could learn how people actually use computers. The reaction was immediate: within hours, internal comment threads were flooded with anger, confusion, and more than a hundred emoji reactions that left little to the imagination about how employees felt.

When an engineering manager asked how to opt out, Meta’s chief technology officer, Andrew Bosworth, had a blunt answer: there was no opt-out, at least not on a company laptop. This is the same company that is also tying AI tool usage to performance reviews, running mandatory “AI Transformation Weeks” to retrain its workforce, and building internal dashboards that gamify how many AI tokens employees consume in a day — a metric so aggressively tracked that some workers started building AI agents to manage their other AI agents. The whole thing started to resemble a feedback loop eating itself.

The layoffs just made everything worse

None of this is happening in a vacuum. On April 17, news broke that Meta was planning to cut roughly 10% of its workforce — around 8,000 people — with the first wave scheduled for May 20. Employees who had spent weeks being told to embrace AI, train with AI, and now have their computer behavior harvested to train AI were suddenly also wondering whether they had spent that time building their own replacements. The timing was, to put it generously, awful. Internal posts described the mood as “incredibly demoralizing.” At least three countdown websites appeared, tracking the days to the layoff date. Employees circulated nihilistic memes. One popular internal post simply read: “It does not matter.”

Mark Zuckerberg addressed the data collection at a company-wide meeting, framing it not as surveillance but as a way to teach AI how “smart people use computers to accomplish tasks.” He also noted that AI is “probably one of the most competitive fields in history” — a line that landed differently for people sitting in an office, wondering if they’d still have a job in three weeks.

This is just a preview of what’s coming everywhere

What's unfolding at Meta isn't limited to Meta; it's just further along than most. Microsoft, Coinbase, and Block have all made similar moves recently, restructuring around AI in ways that have led to layoffs and internal friction. The difference is that Meta is doing all of it simultaneously and at scale: retraining workers, surveilling their behavior, tying job security to AI adoption metrics, and cutting headcount to fund the whole endeavor.

There is no clean way to do any of this. An employee revolt over keystroke tracking at one of the world’s most powerful technology companies — one that is, among other things, actively building AI systems designed to monitor and understand human behavior — is its own kind of irony. Meta spent years convincing billions of people to share their data willingly. Getting its own employees on board is proving considerably harder.

Shimul Sood
Shimul is a contributor at Digital Trends, with over five years of experience in the tech space.
Rice grain-sized sensor could give robots a delicate touch and keep them from breaking stuff

Robots are incredibly precise, but being gentle is not always their strong suit. A machine that can build a car with near-perfect accuracy can still apply too much pressure in places where even the smallest mistake matters, such as inside a human eye during delicate surgery. That is why researchers at Shanghai Jiao Tong University are developing a new type of force sensor that could help robots "feel" what they are touching more accurately.

The sensor is tiny, about the size of a grain of rice at just 1.7 millimeters wide, making it small enough to fit inside advanced surgical tools. What makes it especially interesting is that it does not rely on traditional electronics. Instead, it uses light to measure force from every direction, including pressure, sliding movements, and twisting.

Here is how it works. At the tip of an optical fiber sits a soft material that slightly changes shape when it comes into contact with something. That tiny deformation alters how light travels through the sensor. The altered light pattern is then sent through optical fibers to a camera, which captures it like an image. Researchers then use a machine learning model to study those light patterns and translate them into precise force readings. In simple terms, the system learns how to "read" touch through light alone, without needing a bunch of wires or multiple separate sensors packed into such a tiny space.
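The calibrate-then-decode idea behind the sensor can be sketched in a few lines. This is a toy simulation, not the researchers' actual pipeline: it assumes light patterns vary (here, linearly) with a 3-axis force, then fits the simplest possible "learned" decoder with least squares, where the real system would use camera images and a neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each touch deforms the soft tip, producing a light
# pattern the camera captures as a small image (flattened to 64 pixels here).
n_samples, n_pixels = 500, 64
true_map = rng.normal(size=(3, n_pixels))  # force -> pattern physics (unknown to the decoder)

# Calibration: apply known 3-axis forces (normal, shear-x, shear-y) and
# record the resulting light patterns, with a little measurement noise.
forces = rng.uniform(-1.0, 1.0, size=(n_samples, 3))
patterns = forces @ true_map + 0.01 * rng.normal(size=(n_samples, n_pixels))

# "Learning" step: fit a linear decoder from light pattern back to force.
# (A real system would likely train a neural network on camera frames.)
decoder, *_ = np.linalg.lstsq(patterns, forces, rcond=None)

# Inference: read out the force from a new, unseen light pattern.
test_force = np.array([0.3, -0.5, 0.1])
estimate = (test_force @ true_map) @ decoder
print(np.round(estimate, 2))  # approximately recovers [0.3, -0.5, 0.1]
```

The key design point survives even in this toy version: once the mapping from deformation to light is calibrated, the decoder recovers multi-axis force from optics alone, with no electronics at the tip.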

Sci-fi got the gadgets right, but the vibes wrong
Sci-fi got plenty of consumer tech right, but reality keeps delivering the useful, compromised version of the dream

I was recently waiting for an Uber when the GPS decided to lie for sport. The car was somewhere nearby, I was somewhere nearby, and somehow both of us were trapped in that modern ritual of wrong pins, slow turns, vague waving, and "I'm here" messages that help absolutely no one.

That was when I had a very reasonable thought: this is exactly where a hologram of a giant arrow pointing at me would be useful.

Google’s Gemini Intelligence leak has me excited, but please not that name
Gemini Intelligence sounds like Apple Intelligence’s Android cousin

While Google is helping Apple upgrade its AI, the search giant may have grown a little too fond of the Apple Intelligence name. A new leak shared by Mysticleaks on Telegram seems to show "Gemini Intelligence" inside Google's software running on what looks like a Pixel smartphone.

For now, it is best to take the leak with a grain of salt until there is something more concrete. But if the video is accurate, Google could be preparing the feature for the Pixel 11 series, which is expected to launch around August 2026.
