MIT’s TactStyle: 3D Modeling You Can Feel

In a leap forward for 3D modeling, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have unveiled TactStyle, a novel system that allows creators to stylize 3D models with both visual and tactile properties derived from image prompts. This innovation bridges the gap between visual appearance and the sense of touch, opening up new possibilities for industries ranging from CGI to product design.

Traditional 3D modeling tools often rely on text or image prompts to define color and form, but they frequently overlook the tactile dimension crucial to human perception. Tactile properties like roughness and bumpiness are fundamental to how we interact with physical objects. Existing methods often demand advanced CAD expertise and lack tactile feedback, limiting their realism.

TactStyle addresses this by enabling users to replicate both visual and tactile properties from a single image input. The system separates visual from geometric stylization, learning the correlation between a texture’s visual appearance and its heightfield so that tactile properties can be replicated directly from an image. According to Faraz Faruqi, lead author of the paper on the project, “You could imagine using this sort of system for common objects, such as phone stands and earbud cases, to enable more complex textures and enhance tactile feedback in a variety of ways.”

Applications Across Industries

The potential applications of TactStyle are vast. In education, learners can explore diverse textures from around the world without leaving the classroom. In product design, rapid prototyping becomes more efficient, allowing designers to quickly iterate and refine tactile qualities.

Faruqi adds, “You can create tactile educational tools to demonstrate a range of different concepts in fields such as biology, geometry, and topography.” The tool lets users download a base design from platforms like Thingiverse and customize it with the desired styles and textures while preserving the design’s functionality, even without extensive technical expertise.

How TactStyle Works

Unlike traditional methods that require specialized tactile sensors to capture surface microgeometry, TactStyle leverages generative AI to create a heightfield directly from an image of the texture. This innovative approach allows for the replication of surface microgeometry without needing a physical object or its recorded surface.
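To make the heightfield idea concrete, here is a minimal sketch in Python of how a per-pixel heightfield can drive surface geometry as vertex displacement on a flat mesh patch. Everything in it (the `displace_grid` helper, the synthetic bump texture) is an illustrative assumption, not code from the TactStyle paper.

```python
# Minimal sketch: a heightfield is an image whose pixel values are surface
# elevations. Applying it as a displacement map turns bright texture regions
# into raised bumps. Illustrative only -- not TactStyle's actual code.
import numpy as np

def displace_grid(heightfield: np.ndarray, scale: float = 1.0) -> np.ndarray:
    """Turn an HxW heightfield into an (H*W, 3) array of displaced vertices.

    Each pixel value becomes the z-offset for the corresponding vertex of a
    regular grid, reproducing the surface microgeometry encoded in the image.
    """
    h, w = heightfield.shape
    ys, xs = np.mgrid[0:h, 0:w]      # vertex positions on a regular grid
    zs = scale * heightfield         # per-vertex elevation from the image
    return np.stack([xs, ys, zs], axis=-1).reshape(-1, 3)

# Example: a synthetic bumpy texture stands in for a generated heightfield.
bumps = (np.sin(np.linspace(0, 8 * np.pi, 64))[:, None]
         * np.sin(np.linspace(0, 8 * np.pi, 64))[None, :])
vertices = displace_grid(0.5 * (bumps + 1), scale=2.0)
print(vertices.shape)  # (4096, 3)
```

In TactStyle itself, the generated heightfield is applied to the geometry of the full 3D model rather than a flat patch, as described next.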

The system builds upon a preexisting method called “Style2Fab,” which modifies the model’s color channels to match the input image’s visual style. TactStyle uses a fine-tuned variational autoencoder to translate the input image into a corresponding heightfield, which is then applied to modify the model’s geometry, creating the tactile properties.
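The paper’s exact architecture isn’t reproduced here, but the following PyTorch sketch shows the general shape of an image-to-heightfield variational autoencoder: an encoder compresses the RGB texture into a latent distribution, and a decoder produces a single-channel heightfield. The `ImageToHeightfield` class, its layer sizes, and the latent dimensionality are assumptions for illustration, not the authors’ fine-tuned model.

```python
# Hedged sketch of the image-to-heightfield step: a small convolutional VAE
# mapping an RGB texture image to a single-channel heightfield. Architecture
# and names are illustrative assumptions, not TactStyle's published model.
import torch
import torch.nn as nn

class ImageToHeightfield(nn.Module):
    def __init__(self, latent_channels: int = 8):
        super().__init__()
        # Encoder: RGB texture -> latent mean and log-variance maps
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 2 * latent_channels, 3, padding=1),
        )
        # Decoder: latent sample -> single-channel heightfield in [0, 1]
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_channels, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        mu, logvar = self.encoder(image).chunk(2, dim=1)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        return self.decoder(z)

model = ImageToHeightfield()
texture = torch.rand(1, 3, 128, 128)   # stand-in for the input texture image
heightfield = model(texture)           # (1, 1, 128, 128) displacement map
print(heightfield.shape)
```

The output plays the role of the heightfield in the displacement step sketched above: visual stylization recolors the model, while this geometric branch reshapes its surface.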

Experiments demonstrated that TactStyle significantly improves on traditional stylization methods. In psychophysical studies, users perceived TactStyle’s generated textures as matching both the tactile properties they expected from the visual input and the tactile features of the original texture, yielding a cohesive tactile and visual experience.

Future Directions

Looking ahead, the team aims to extend TactStyle to generate novel 3D models using generative AI with embedded textures. They also plan to explore “visuo-haptic mismatches” to create novel experiences with materials that defy conventional expectations, such as something that appears to be made of marble but feels like it’s made of wood.

The research team includes Faraz Faruqi and Stefanie Mueller, along with PhD students Maxine Perroni-Scharf and Yunyi Zhu, visiting undergraduate student Jaskaran Singh Walia, visiting master’s student Shuyue Feng, and assistant professor Donald Degraen of the Human Interface Technology (HIT) Lab NZ in New Zealand.
