
Hands on with Leonar3do, the next big thing in 3D modeling


As recent University of Iowa graduate Zach Arenson will tell you, the learning curve for the average 3D modeling program can run to several hundred hours, or weeks of work. But with a new tool from Hungarian start-up Leonar3do, that time dropped to just a few days. I know what you’re thinking: Obviously the company paid this kid to boast about the product and how wonderful it makes his life. But in my own hands-on demo, it took only a few minutes before I was drawing my own (unintentionally phallic) version of the Death Star.

The magic comes in two parts: the 3D modeling software and a physical tool, the Bird. The Bird is a tripod-esque pen that lets users drag and shift the 3D model, pinching and pulling until the object takes the desired shape. A combination of three line sensors attached to the monitor and a pair of 3D glasses also lets the user look around the virtual object, making it seem as though the item is floating right in front of you. Tilt your head to the left and you can see the side of the object all the way around to the back.
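Leonar3do hasn’t published how its three monitor-mounted sensors actually compute the Bird’s position, but recovering a 3D point from three fixed reference points is classically done with trilateration. Here is a minimal Python sketch of that general technique under that assumption; the sensor layout and function names are hypothetical, not Leonar3do’s actual implementation:

```python
import math

# Small helpers for 3D vector math on plain tuples.
def _sub(a, b): return tuple(x - y for x, y in zip(a, b))
def _add(a, b): return tuple(x + y for x, y in zip(a, b))
def _scale(a, s): return tuple(x * s for x in a)
def _dot(a, b): return sum(x * y for x, y in zip(a, b))
def _norm(a): return math.sqrt(_dot(a, a))
def _cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def trilaterate(p1, p2, p3, d1, d2, d3):
    """Given three sensor positions p1..p3 and measured distances
    d1..d3 to the tracked tip, return the tip's 3D position.

    Uses the standard closed-form trilateration: build a local
    coordinate frame from the sensor positions, solve for x, y, z.
    """
    # Local frame: ex along p1->p2, ey in the sensor plane, ez normal to it.
    ex = _scale(_sub(p2, p1), 1.0 / _norm(_sub(p2, p1)))
    i = _dot(ex, _sub(p3, p1))
    tmp = _sub(_sub(p3, p1), _scale(ex, i))
    ey = _scale(tmp, 1.0 / _norm(tmp))
    ez = _cross(ex, ey)
    d = _norm(_sub(p2, p1))
    j = _dot(ey, _sub(p3, p1))

    # Intersect the three distance spheres in the local frame.
    x = (d1**2 - d2**2 + d**2) / (2.0 * d)
    y = (d1**2 - d3**2 + i**2 + j**2) / (2.0 * j) - (i / j) * x
    z = math.sqrt(max(d1**2 - x**2 - y**2, 0.0))

    # Two mirror-image solutions exist (one on each side of the
    # sensor plane); pick the one in front of the monitor (+ez).
    return _add(p1, _add(_scale(ex, x), _add(_scale(ey, y), _scale(ez, z))))
```

Three distances alone always leave two candidate points, mirrored across the plane of the sensors; a real system resolves the ambiguity the same way the sketch does, by assuming the pen is on the user’s side of the screen.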

The most fascinating aspect of the Leonar3do technology is how intuitive the experience feels. Arenson is right that the learning curve is extremely small; if you already have some working knowledge of Photoshop, you’ll find the brush stroke percentage and sizing tools familiar. Leonar3do representative Ronald Manyai said the company held a contest at a local school, and students were able to model a 3D car in just one weekend.


There are several ways to toy around with the Leonar3do technology: You can buy the full kit, which comes with the Bird pen, triangulation sensors, and 3D goggles; purchase just the software; or download the app and use your mobile phone as the main 3D controller. In the last case, you hover the phone in front of the monitor and use the buttons on the phone’s screen to reshape your virtual 3D sculpture.

It’s a fun tool for both designers and educators, helping students learn the basics of 3D modeling and logic without an ounce of programming or architectural knowledge. At $2,000 per license, it’s not a completely unrealistic price point for classrooms across America, but Manyai says a cheaper $50 version will launch in the coming months for those looking to experiment. The accompanying app will come to both the App Store and Google Play in March.

Here’s a demonstration video from Leonar3do showcasing a much more appealing model than my tentacle planet.

Natt Garun