We proposed an innovative solution designed to transform the museum
experience for visually impaired visitors, while also enhancing the
experience for all participants. MuseTouch leverages Internet of
Things (IoT), Conversational AI Chatbots, and Smart Technologies to
create a 4D immersive experience from 2D paintings. By partnering with
free-to-the-public museums, our goal is to provide an inclusive
environment where visually impaired individuals can navigate exhibits
independently.
Visitors would wear headphones and carry a haptic tablet, allowing them
to interact with conversational AI, while immersive sounds and tactile
feedback deepen their understanding of and connection with the artwork
and its creators. This project addresses the critical need to make art
more accessible, opening up new opportunities for engagement and
inclusion in the museum space.
I built this project with two other UX Researchers, and we relied on
critiques from my two professors and classmates to refine our work.
Role
UX Researcher & UX Designer
Project
Advanced Interaction Design Studio
Timeline
Seven Weeks
Tools
Figma, Miro, Traditional Tools, Google Suite, iMovie
Why Art Museums?
We selected art museums as the focus for our technologies. While art
museums provide diverse and enriching experiences to visitors, they
also present significant challenges. Most notably, exhibitions are
predominantly visual and typically prohibit touching the artworks.
This environment can be particularly exclusionary for visually
impaired visitors. Motivated by these challenges, we committed
ourselves to innovating a solution that would make art museums
accessible to everyone.
The Importance of Accessibility in Museums
In the course of developing our immersive experience for visually
impaired individuals in museums, we undertook several explorations
that informed our concept. Technically, we integrated a tactile tablet
with touch sensors, Bluetooth headphones, and LiDAR technology to
detect how users interact with the environment. These IoT elements
allow our Conversational AI to respond dynamically, fostering
meaningful interactions. Users engage with the system through
headphones, a microphone, and a haptic tablet, enabling a
multi-sensory experience.
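To make this interaction loop concrete, here is a minimal sketch of how a touch on the tablet might be routed to the conversational AI. Every name, region, and schema here is illustrative only; our project stayed at the concept and prototype stage, so none of this reflects a real implementation.

```python
from dataclasses import dataclass

@dataclass
class TouchEvent:
    """A single touch on the haptic tablet (illustrative schema)."""
    x: int           # pin-grid column touched
    y: int           # pin-grid row touched
    artwork_id: str  # artwork currently shown on the tablet

def region_for(event: TouchEvent) -> str:
    """Map a touch position to a named region of the artwork.
    A real system would look this up in per-artwork metadata;
    here we use a hard-coded placeholder boundary."""
    return "upper-left" if event.x < 40 and event.y < 30 else "elsewhere"

def prompt_for(event: TouchEvent) -> str:
    """Build the prompt the conversational AI would answer aloud."""
    return (f"The visitor is touching the {region_for(event)} region "
            f"of artwork {event.artwork_id}. Describe what is there.")

# Example: a touch near the top-left corner of a painting
print(prompt_for(TouchEvent(x=10, y=5, artwork_id="monet-water-lilies")))
```

The key idea is that the IoT layer only produces small, structured events; the conversational AI turns them into rich spoken responses.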
Market research highlighted a significant gap: existing tools for
visually impaired museum visitors are limited to basic aids or
monotone text-to-speech systems, which fail to provide a truly
immersive experience. By addressing this gap, we aim to create an
inclusive environment that enhances the museum experience for all
visitors, regardless of ability.
Additionally, involving artists in the process ensures that their work
is respected and authentically translated into a 4D model. This
collaboration preserves the integrity of the artwork while making it
accessible in a novel, interactive format. Our approach not only opens
up new possibilities for accessibility but also redefines how art can
be experienced in a museum setting.
IoT, Conversational AI Bots, & Smart Technologies
In our class project, we were assigned the challenge of utilizing IoT
and Conversational AI Bots. Building on this foundation, we were
encouraged to integrate smart technologies to create a fully immersive
museum experience. Our approach aimed to push the boundaries of
traditional museum interactions, leveraging cutting-edge technology to
engage users in a dynamic and inclusive way.
IoT and Conversational AI Bots (AI Chatbots)
In our preliminary research, we focused on understanding how IoT and
AI Chatbots could enhance accessibility in public spaces, particularly
for individuals with disabilities. We identified several key areas
that could benefit from improved accessibility, such as transportation
hubs, cultural sites, and museums. These spaces often lack the
necessary tactile, visual, and auditory aids needed to support all
visitors, especially those who are visually or cognitively impaired.
IoT technologies, like touch sensors and beacons, offer opportunities
to create more interactive and inclusive environments by providing
automated, context-aware responses to user actions.
For example, in museums, IoT can facilitate optimized route pathing
and interactive storytelling, making exhibits more accessible and
engaging. AI-powered chatbots, with their ability to learn and adapt,
can further enhance these experiences by offering dynamic, real-time
interactions tailored to the user's needs. By integrating these
technologies, we aimed to create accessible experiences that are not
only functional but also enriching, breaking down barriers and
ensuring that everyone, regardless of ability, can fully engage with
and enjoy these public spaces.
Ideation Matchmaking
During the ideation and matchmaking process, our team utilized a
structured approach to generate innovative solutions aimed at
enhancing accessibility and inclusivity in public spaces. We combined
elements from various categories—such as public spaces, target
personas, and action verbs—to explore how IoT and conversational AI
could be leveraged in new and meaningful ways. By brainstorming ideas
across different contexts, like museums, historical landmarks, and
national parks, we developed concepts that focused on the unique needs
of diverse user groups, including individuals with visual impairments,
mobility disabilities, and language disorders.
Through this process, we not only generated a wide range of ideas but
also refined them by considering real-life scenarios, the potential
impact on users, and the overall feasibility of implementation. This
methodical ideation phase allowed us to identify the most promising
solutions that could be further developed into prototypes, ensuring
that our designs were both innovative and deeply rooted in
user-centered principles.
Concepting and a bit of a Pivot
From our ideation matchmaking, we chose the idea that seemed the most
unique, as we wanted to challenge ourselves with an innovative design.
This led us to select national landmarks as the public space where our
technology would assist people with disabilities.
National Landmarks are Cool!
Our initial concept focused on creating small-scale replicas of
national landmarks embedded with interactive technology to enhance
accessibility. These models were designed with sensors that would
activate a conversational AI to provide spoken information, allowing
visually impaired individuals to "visualize" landmarks through touch
and sound. We envisioned complementing these models with related
artifacts, such as textured layouts and miniature dioramas, to create
a more immersive and tactile experience.
As part of our research, we began by exploring accessibility solutions
specific to national landmarks. This led us to examine how museums
have successfully implemented accessibility guidelines, such as using
miniature sculptures and physical artifacts to help visitors who need
accommodations engage with exhibits. We were inspired by these
strategies and considered how they could be applied to national
landmarks. We initially also explored ideas to assist individuals with
disabilities in navigating national landmarks but realized that our
project had become too broad in scope. To ensure a more focused and
impactful solution, we decided to concentrate on the development of
national landmark artifacts.
. . . But Museums are Cooler!
Our pivot to museums was driven by a deeper analysis of technical
feasibility, financial viability, and user desire for our concepts. We
recognized that while museums have already made strides in
accessibility, national landmarks have yet to widely adopt similar
technologies, leaving a gap our work could eventually help fill.
Additionally, companies like Tactile Studio
have successfully replicated models of artwork with embedded
technology, supporting the viability of our concept. By integrating
IoT into these models and activating conversational AI through sensor
mechanisms like touch or heat sensors, we saw a clear path to enhance
accessibility in both museums and, potentially, national landmarks.
This realization led us to focus on museums as a starting point, where
the context and technology could be more effectively applied.
This is an artifact of the whiteboarding process we went through every
class while brainstorming new ideas for our project. Although quite
messy, it helped us work through our thoughts!
In our user research phase, we initiated a field study at the Carnegie
Museum of Art, where we engaged with museum visitors, gallery staff,
and diversity officers to gain insights into the current state of
accessibility in museums. These interviews highlighted a universal
need for enhanced visual accessibility, with one visitor noting, "I
believe that increasing any visual accessibility in a museum would be
helpful." However, we found that existing solutions are often limited
to certain types of art, such as simulated paint strokes, and fail to
provide a comprehensive experience for all visitors. This gap pointed
to a demand for a more immersive, interactive exploration of art that
could be accessible to everyone.
Our literature review supported these findings, revealing a
significant accessibility gap for blind and visually impaired museum
visitors. Technologies like audio descriptions and tactile interfaces,
while beneficial, are not widely adopted. This realization prompted us
to consider how we could replicate the visual experience of art
through tactile means, leading us to explore various technological
solutions.
Throughout this process, we continuously reframed and refined our
ideas based on feedback from users, our professors, and peers. This
iterative approach allowed us to move towards a more effective
solution, ultimately focusing on technologies like touchpad devices
that offer tactile feedback, ensuring an inclusive and enriching
experience for all museum visitors.
First Concept Presentation
After extensive research, we presented our concept to the class and our
professors. We described the concept of MuseTouch, our technical
exploration, the aesthetics we aimed for, and our process, all of which
you have already read about here or will be reading about soon :)
What is MuseTouch Exactly?
To summarize our concept so far, MuseTouch is a cutting-edge solution
designed to make art accessible to everyone, including those with visual
impairments. Upon entering the museum, visitors can pick up a set of
headphones and a tactile tablet. This portable device accompanies them
throughout the gallery, allowing for an immersive, interactive
experience.
Equipped with IoT sensors strategically placed around the museum,
MuseTouch enables users to feel the texture and details of the artwork
directly on the tablet's tactile screen. As they explore each piece, the
integrated AI provides real-time guidance, offering insightful
commentary and engaging in interactive conversations about the art. This
personalized experience bridges the gap between traditional and
accessible art viewing, making the museum visit more inclusive and
enriching for all.
The Storyboard
In our storyboard, Jane, who is visually impaired, struggles to connect
with visual-centric art. Her friend attempts to describe the artwork but
falls short, leaving Jane feeling disheartened.
To enhance her experience, an IoT and AI device is introduced. When the
art is detected by the device's LiDAR sensor, a tactile screen displays
images of the artwork, allowing Jane to explore it through touch. The AI
also provides guided narration, enriching her understanding of the
piece.
Jane's friend can see the same display and engage in meaningful
discussions about the art with her. This shared experience brings them
closer, leaving both of them feeling more connected and joyful.
After our presentation, we were surprised to receive the following feedback:
"We have no feedback, your concept is strong."
Which was great news for us! We had sufficiently covered all of the core
problems and found solutions for potential issues that might arise with
our product.
. . . And Then We Reached Our Final Presentation
An Introduction to the MuseTouch Tablet:
A tactile screen similar to the Dot Pad
Dynamically changes to match an image with raised dots and paint strokes
Displays picture of painting with raised dots
Bluetooth headphones
Connects to tablet
Can control volume on tablet
An Introduction to the MuseTouch IoT LiDAR Sensors:
LiDAR sensors near art pieces connect with the tablet
The tablet updates its display when it comes within range of the sensor
corresponding to the nearest artwork
An Introduction to the MuseTouch Conversational AI Bot:
A Summary of Our Tech
For MuseTouch, we focused on integrating a tactile tablet, Bluetooth
headphones, LiDAR sensors, and conversational AI to create an
immersive and accessible museum experience. The tactile tablet is
inspired by the Dot Pad technology, which uses touch sensors to create
dynamic, raised images. However, MuseTouch goes further by incorporating
not only raised dots but also paint strokes, enabling users to
experience both flat and three-dimensional artworks in a tactile form.
Sculptures, for example, would be represented with raised dots, allowing
users to feel the contours and details.
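As a rough sketch of how a painting could become a raised-dot image, the tablet would need to downsample each artwork into a grid of pin heights, similar in spirit to how the Dot Pad renders images. The grid size, threshold, and sampling scheme below are our own illustrative assumptions, not Dot Pad specifications.

```python
def image_to_pin_grid(pixels, grid_w=60, grid_h=40, threshold=128):
    """Downsample a grayscale image (a list of rows of 0-255 values)
    into a binary grid of raised (1) / lowered (0) tactile pins.
    Grid dimensions and threshold are illustrative placeholders."""
    src_h, src_w = len(pixels), len(pixels[0])
    grid = []
    for gy in range(grid_h):
        row = []
        for gx in range(grid_w):
            # Sample the source pixel at this pin's position
            sx = gx * src_w // grid_w
            sy = gy * src_h // grid_h
            # Raise the pin where the image is dark (e.g. a paint stroke)
            row.append(1 if pixels[sy][sx] < threshold else 0)
        grid.append(row)
    return grid

# Tiny example: a 4x4 "image" whose left half is dark
img = [[0, 0, 255, 255]] * 4
grid = image_to_pin_grid(img, grid_w=4, grid_h=4)
print(grid[0])  # → [1, 1, 0, 0]
```

A production device would of course use richer sampling (averaging, edge detection, multiple pin heights for paint-stroke depth), but the mapping from image to pin grid is the core idea.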
Bluetooth headphones connect seamlessly with the tablet, ensuring a
personalized audio experience where the conversational AI provides
guided, immersive narratives about the artwork. This AI is adaptable,
capable of adjusting its language complexity and speech pace based on
the user's preferences, thereby making art accessible and engaging for
everyone without disturbing other visitors.
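A minimal sketch of how those preferences might be stored and adjusted from spoken feedback is below. The field names, rate bounds, and keyword matching are assumptions for illustration; our concept did not specify an implementation.

```python
from dataclasses import dataclass

@dataclass
class NarrationPrefs:
    """Per-visitor narration settings (illustrative fields)."""
    reading_level: str = "general"  # "simple", "general", or "expert"
    speech_rate: float = 1.0        # 1.0 = normal speaking pace

def adjust(prefs: NarrationPrefs, feedback: str) -> NarrationPrefs:
    """Nudge settings based on a spoken request from the visitor."""
    if "slower" in feedback:
        prefs.speech_rate = max(0.5, prefs.speech_rate - 0.25)
    elif "faster" in feedback:
        prefs.speech_rate = min(2.0, prefs.speech_rate + 0.25)
    if "simpler" in feedback:
        prefs.reading_level = "simple"
    return prefs

p = adjust(NarrationPrefs(), "please speak slower and simpler")
print(p.speech_rate, p.reading_level)  # → 0.75 simple
```

In practice the AI itself would interpret requests far more flexibly than keyword matching, but keeping the preferences as a small, explicit state object is what lets the narration stay consistent from artwork to artwork.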
The experience is further enhanced by LiDAR sensors placed beneath each
artwork, which detect the proximity of the tablet. As the user
approaches a piece, the LiDAR triggers the tablet to update its display
to mirror the artwork above it, ensuring that the tactile representation
is always relevant to the user’s location in the museum. This setup
creates a truly interactive and inclusive environment where visually
impaired visitors can explore art through touch and sound, offering a
unique way to connect with and appreciate museum collections.
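The proximity logic above can be sketched in a few lines: given each sensor's distance reading to the tablet, show the closest in-range artwork, or nothing when the visitor is between pieces. The range threshold and artwork IDs are illustrative assumptions.

```python
def nearest_artwork(distances, max_range=2.0):
    """Pick which artwork's tactile image to show, given each LiDAR
    sensor's distance (in metres) to the tablet. Returns None when
    the visitor is not close enough to any piece. The 2 m threshold
    is an illustrative placeholder, not a measured design value."""
    in_range = {art: d for art, d in distances.items() if d <= max_range}
    if not in_range:
        return None
    return min(in_range, key=in_range.get)

# The visitor stands 1.2 m from one painting and 4.8 m from another
readings = {"starry-night": 1.2, "the-scream": 4.8}
print(nearest_artwork(readings))  # → starry-night
```

Choosing the minimum in-range distance, rather than any sensor that fires, prevents the display from flickering between two adjacent artworks as the visitor walks past.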
Overall, this experience has been eye-opening and has shown me how we
can leverage physical technology to transform learning and exploration
in public spaces. Our work with MuseTouch, which integrates tactile
tablets, Bluetooth headphones, LiDAR sensors, and conversational AI,
illustrates the potential to make museums and other public venues more
engaging and accessible.
By using the tactile tablet to replicate artwork textures and providing
real-time updates through LiDAR sensors, we can offer a richer
experience for everyone, including those with visual impairments. The
Bluetooth headphones and conversational AI further enhance this by
offering personalized, immersive narratives that adjust to each user's
preferences. This approach not only bridges accessibility gaps but also
ensures that all visitors can connect more deeply with the art and
history around them. It's exciting to see how these innovations can
create more inclusive spaces and inspire a broader appreciation of art
and culture.
Thank you for reading!