MIT’s TactStyle: 3D Modeling You Can Feel with AI

3D modeling is crucial in industries from Hollywood CGI to product design, and it typically relies on text or image prompts that describe visual appearance. These tools, however, overlook the sense of touch, which limits realism. Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed “TactStyle,” a system that stylizes 3D models from image prompts while replicating both their visual and tactile properties.

Tactile properties, such as roughness and texture, are fundamental to how we perceive physical objects, yet existing modeling methods demand CAD expertise and rarely support tactile feedback. TactStyle addresses this by separating visual and geometric stylization, so a single image input can drive both how a model looks and how it feels.

Faraz Faruqi, lead author and a PhD student, envisions far-reaching applications for TactStyle, including home decor, personal accessories, and tactile learning tools. Users can customize designs with desired styles and textures. In education, TactStyle lets learners explore diverse textures, while in product design, it facilitates rapid prototyping by refining tactile qualities.

“You could imagine using this sort of system for common objects, such as phone stands and earbud cases, to enable more complex textures and enhance tactile feedback in a variety of ways,” says Faruqi. “You can create tactile educational tools to demonstrate a range of different concepts in fields such as biology, geometry, and topography.”

Traditional methods for replicating textures use tactile sensors like GelSight, which capture surface microgeometry. However, this requires a physical object. TactStyle leverages generative AI to create a heightfield directly from an image, replicating surface microgeometry without needing a physical reference.
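To make the image-to-heightfield idea concrete, here is a minimal sketch that stands in for the learned step with simple grayscale luminance. This is only an illustrative proxy, not TactStyle's actual generative model, and the file name and scale parameter are hypothetical:

```python
# Illustrative sketch only: approximates "image -> heightfield" with simple
# luminance, standing in for TactStyle's learned generative model.
import numpy as np
from PIL import Image

def image_to_heightfield(path: str, height_scale: float = 1.0) -> np.ndarray:
    """Convert a texture photo into a 2D heightfield (arbitrary physical units).

    TactStyle trains a generative model for this step; here grayscale
    intensity serves as a crude proxy so the rest of a pipeline can be tested.
    """
    img = Image.open(path).convert("L")          # grayscale texture image
    h = np.asarray(img, dtype=np.float32) / 255  # normalize to [0, 1]
    h = h - h.mean()                             # zero-center the relief
    return h * height_scale                      # scale to physical units

if __name__ == "__main__":
    field = image_to_heightfield("texture.jpg", height_scale=0.5)  # hypothetical file
    print(field.shape, field.min(), field.max())
```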

Customizing existing designs from platforms like Thingiverse can be challenging. TactStyle enables high-level customization of downloadable models while preserving functionality. In experiments, TactStyle showed significant improvements over traditional stylization methods: the heightfields it generated corresponded closely to the visual texture in the input image, so the fabricated surface reproduced the tactile properties that image implies.
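One way to quantify that correspondence is to compare a generated heightfield against a reference scan of the real surface. The sketch below uses Pearson correlation, RMSE, and RMS roughness as assumed metrics; the paper's exact evaluation protocol may differ:

```python
# Illustrative evaluation sketch (assumed metrics, not the paper's exact
# protocol): compare a generated heightfield against a reference surface scan.
import numpy as np

def heightfield_similarity(generated: np.ndarray, reference: np.ndarray) -> dict:
    """Return simple agreement metrics between two same-shape heightfields."""
    g = generated.ravel() - generated.mean()
    r = reference.ravel() - reference.mean()
    pearson = float(g @ r / (np.linalg.norm(g) * np.linalg.norm(r)))
    rmse = float(np.sqrt(np.mean((generated - reference) ** 2)))
    # RMS roughness (Rq) is a standard surface-texture statistic.
    rq_gen = float(np.sqrt(np.mean(g ** 2)))
    rq_ref = float(np.sqrt(np.mean(r ** 2)))
    return {"pearson": pearson, "rmse": rmse,
            "rq_generated": rq_gen, "rq_reference": rq_ref}
```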

A psychophysical experiment found that users perceive TactStyle’s generated textures as matching both the tactile properties they expect from the visual input and the tactile feel of the original texture, creating a unified tactile and visual experience.

TactStyle uses an existing method, “Style2Fab,” to modify the model’s color channels to match the input image’s visual style. A fine-tuned variational autoencoder then translates the input image into a corresponding heightfield, which modifies the model’s geometry to produce the tactile properties. The core innovation lies in this geometry stylization module, which uses a fine-tuned diffusion model to generate heightfields from texture images.
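The final step, applying a heightfield to a model's surface, can be sketched as vertex displacement. This simplified example displaces a flat grid along its normal direction (+z) and is only an approximation of TactStyle's actual geometry stylization on arbitrary meshes:

```python
# Simplified sketch of heightfield-driven geometry: displace a flat grid mesh
# along its surface normal (+z here) by the sampled height values.
import numpy as np

def displace_grid(heightfield: np.ndarray, cell_size: float = 0.1) -> tuple:
    """Build a grid mesh from a heightfield; returns (vertices, triangle faces)."""
    rows, cols = heightfield.shape
    ys, xs = np.mgrid[0:rows, 0:cols].astype(np.float32) * cell_size
    verts = np.stack([xs.ravel(), ys.ravel(), heightfield.ravel()], axis=1)

    faces = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            i = r * cols + c
            faces.append([i, i + 1, i + cols])             # upper triangle of the cell
            faces.append([i + 1, i + cols + 1, i + cols])  # lower triangle
    return verts, np.asarray(faces)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    field = rng.normal(scale=0.05, size=(64, 64))  # stand-in heightfield
    vertices, triangles = displace_grid(field)
    print(vertices.shape, triangles.shape)  # (4096, 3) (7938, 3)
```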

The team aims to extend TactStyle to generate novel 3D models using generative AI with embedded textures, replicating both the form and function of fabricated 3D models. They also plan to investigate “visuo-haptic mismatches” to create novel experiences with materials that defy conventional expectations.

The research paper was co-authored by Faruqi, Stefanie Mueller, Maxine Perroni-Scharf, Yunyi Zhu, Jaskaran Singh Walia, Shuyue Feng, and Donald Degraen.
