SINGAPORE – Artificial intelligence (AI) is a game changer that is advancing rapidly year after year, affecting almost every facet of life.

Global management consulting firm McKinsey & Company pointed out in a March 12 report that organisations are already beginning to create the structures and processes that can unlock meaningful value from generative AI (Gen AI).

While traditional AI analyses data and makes predictions, Gen AI goes a step further: it uses data to generate new things, such as text, images or code.

In Singapore, one of the first institutes of higher learning to embed forms of AI, such as machine learning, in its curriculum is the Singapore University of Technology and Design (SUTD). Established in 2009 as Singapore’s fourth public university, it aims to advance knowledge and nurture technically grounded leaders and innovators who serve societal needs, with a focus on design and technology.

Professor Khoo Peng Beng, Head of Pillar at SUTD’s School of Architecture and Sustainable Design, joined the university earlier in 2025. He says AI has light and dark sides – such as job displacement and ethical concerns – that need to be rigorously addressed through the university’s curriculum.

While the university is inventing the AI systems of the future, it is also conducting research on the philosophy and ethics of AI under an approach it calls “Design AI”. This is an intentional approach, grounded in the view that AI and human intelligence are partners in innovation.

“As newer technologies like AI evolve, we have to increasingly sharpen our ability to discern what is good, what is the truth and what is beauty,” says Prof Khoo, 56.

He says that AI itself is agnostic, capable of being put to good or evil uses, and that everything depends on the intention of the human user. “Of overarching importance is to keep human and AI goals aligned,” he notes.

Professor Khoo Peng Beng, SUTD’s Head of Pillar for Architecture and Sustainable Design, with students’ works created using physical and AI-generated models. ST PHOTO: CHONG JUN LIANG

As futurists and inventors, design students speculate on all sorts of future possibilities. For instance, they conduct research on the design of greener cities, the future of work and the future of healthcare. These thought experiments, and the examination of both AI’s light and dark sides, allow students to constantly create provocative scenarios that spark engagement and stimulate debate.

“This provides us the opportunity to question these scenarios from different perspectives and see the future a little more clearly,” says Prof Khoo.

Students in Singapore also have some of the best teachers. About 20 SUTD lecturers are ranked in the “World’s Top 2%” list of leading scientists compiled by Stanford University in the US.

“Our students are exposed to an exceptional research environment from their first year in university,” says Prof Khoo. “Every project is a thought experiment about a speculative future or an exploration of experimental possibilities of existing phenomena. This then allows us to steer ourselves and our world to a better and brighter future.”

This approach of embracing state-of-the-art AI while weighing its impact on society has helped groom SUTD students to excel in the real world. SUTD graduates have commanded top dollar in the job market since the university’s pioneer cohorts entered the workforce.
A Straits Times report in April 2023 noted that nine out of 10 SUTD graduates found jobs within six months of completing their final examinations, based on the school’s 2023 graduate employment survey of 300 new graduates. More than half of SUTD graduates secured full-time permanent employment in the information and communications, financial and insurance, and scientific research and development sectors.

In February, the results of the Joint Graduate Employment Survey revealed that graduates of information and digital technologies courses continued to take home the highest monthly pay at $5,600, up from $5,500 in 2023.

ST scopes out five of SUTD’s most ground-breaking AI projects.

Helping students flesh out bright ideas

Associate professor Carlos Banon with a 3D-printed prototype that his FORMAS.AI software produced from his sketches. ST PHOTO: CHONG JUN LIANG

A new piece of AI-powered design software converts doodles, sketches or simple shapes into layered architectural renders for pitching to design clients or for buildable architecture. This is the invention of SUTD’s Carlos Banon, associate professor of architecture and sustainable design. For the past three years, he has integrated advanced diffusion models and large language models (LLMs) into SUTD’s first-year design workshops, or “studios”, and conducted cutting-edge training there.

Diffusion models are machine learning algorithms that generate high-quality images and other data by learning to turn random noise, step by step, into structured output. The application improves the connection between students’ intuition and their visual output, boosting their creativity while helping them maintain control and ownership.

Prof Banon experienced first-hand the problem designers grapple with: maintaining control of, and executing, their vision when using AI. His user-friendly application, which he christened FORMAS.AI – “formas” is Spanish for “forms” – turns sketches, coloured shapes and 3D images into textured visualisations and 3D-printable geometry with one click.

A pencil sketch of nest-shaped towers (left) is fed into FORMAS.AI, and the AI-generated architectural render shows woven timber shells, misty light and lush terrain. PHOTOS: FORMAS.AI

More than 2,000 architects, interior designers, educators and students globally are early users of the application’s private beta version, ahead of its public launch at the end of May.

By making each step of the AI-driven creative process more transparent and controllable, the software lets users shift from being led by the AI’s creations to actively guiding them. This helps preserve their personal style while leveraging data-driven tools.

“Since 2022, we have been integrating image-based diffusion models to enhance foundational design skills from the first semester at SUTD,” says Prof Banon, 46. He is also the co-founder of Spanish architectural firm Subarquitectura Architects and of research laboratory AirLab@SUTD.

In his classes, students still draw on tracing paper and use cardboard study models, as they are encouraged to nurture primary thoughts and intuition. But the moment a spark of brilliance takes shape, they sketch the idea and feed it into the FORMAS.AI model, which generates a realistic image of the building, complete with materials and the surrounding environment for context.

“In that way, our students develop a new kind of intuition, connecting very basic, abstract shapes and simple operations to complex spatial structures,” says Prof Banon.
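FORMAS.AI itself is proprietary, and the article does not describe its internals. As a rough illustration of the general technique it describes – a diffusion model conditioned on an input sketch so that the render stays faithful to the drawn geometry – the following minimal Python sketch uses the open-source diffusers library with a scribble-conditioned ControlNet. The model names, file names and prompt are assumptions for illustration, not details of Prof Banon’s system.

# Illustrative sketch-to-render pipeline using open-source models.
# This is not FORMAS.AI; model names, files and prompt are assumptions.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# A scribble-conditioned ControlNet constrains every denoising step with the
# input drawing, which is what keeps the output close to the original sketch.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

sketch = load_image("nest_towers_sketch.png")  # hypothetical line drawing, white strokes on black
render = pipe(
    prompt="woven timber towers on a misty hillside, photorealistic architectural render",
    image=sketch,
    num_inference_steps=30,
).images[0]
render.save("nest_towers_render.png")

Because the drawing conditions every step of the generation, the output tends to preserve the sketched massing instead of drifting the way a prompt-only generator can.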
“They then develop new ways to operate in the world, reducing production time and focusing on vital areas revealed during the design process,” he adds.

The software also helps students who are not proficient at drawing architectural shapes, or skilled at visual projection, to present their ideas – like an artist or film-maker.

Currently, most AI image generators, such as Midjourney, are able to produce stunning visuals but drift far from the original vision or sketches, says Prof Banon. This can entail several rounds of time-consuming prompt rephrasing. FORMAS.AI’s intuitive interface, on the other hand, empowers designers to experiment and iterate using basic shapes such as squares, circles or lines on a canvas. The output ranges from a high-resolution architectural rendering to an image that can be fed to 3D printers to produce a prototype.

A colour-coded block study of a residential project drawn on the FORMAS.AI canvas (left) and the same layout rendered as a photo-ready hillside home, showing details in the windows, walls and slope of the terrain. PHOTOS: FORMAS.AI

“Innovative architectural design often starts with a personal vision – something only you notice about a site, the climate, a pattern of living,” Prof Banon says. “It is that ‘trained vision’ that architects develop over time, something that can be transformative as we consider many factors – combining science, art and technological intuition. AI should amplify and sharpen that hunch, not overshadow or replace it.”

Info: To see examples of what users globally are generating, go to FORMAS.AI

An AI-powered Design Brain for architects

Assistant professor Immanuel Koh with 3D-printed architecture study models of high-density housing designs generated from his custom AI model. ST PHOTO: CHONG JUN LIANG

To understand the complex process of incorporating AI into architecture, one needs to first see how different the two are.

When humans draw up architectural plans, they bring a lifetime of lived experiences, sensory input and emotional understanding to the table. They intuitively grasp spatial relationships, human needs and the feeling a space should evoke. They love spaces that create an emotional connection, such as urban nooks that inspire, comfort, excite or satisfy a specific need.

Currently, AI is unable to do this. An AI program needs to be fed large amounts of data, in which it identifies patterns, recombining existing elements and optimising results within clearly defined parameters. AI also lacks the physical and emotional experiences that shape human intuition about space and form. Its understanding is based on data, not lived sensation.

Assistant professor Immanuel Koh of SUTD’s Architecture and Sustainable Design pillar and Design and Artificial Intelligence programme is working on changing that. His “Design Brain” project was conceived in 2024 with the aim of developing AI models capable of digesting, perceiving, understanding, learning, remembering, generating and evaluating designs – specifically, architectural designs.

This is despite architecture being one of the most complex design disciplines in the world, demanding both creativity and logic. Prof Koh has found that while there are many aspects to the project, the problem of spatial design reasoning – and creative “unreasoning” – is the most critical and difficult to solve.

For AI to truly power the design process and production, both humans and AI need to first communicate why, when and how specific design decisions are made during the iterative process.
“It is difficult because it requires a transdisciplinary conceptual understanding and technical skills that lie at the intersection of design and AI,” says Prof Koh, who also directs Artificial-Architecture at SUTD.

Artificial-Architecture is an interdisciplinary research laboratory that focuses on the design and development of deep learning models for artificial creativity, neurocognitive design, generative architecture, predictive urbanism and defence intelligence.

Neurocognitive design is Prof Koh’s parallel line of research. It seeks an empirical understanding of the human brain through the brainwave signals emitted when an architect is reasoning about or creating designs.

Artificial-Architecture has a slate of funded projects that include tie-ups with industry, academia and government agencies, such as AI Singapore and the Urban Redevelopment Authority.

Design Brain tries to create visible, parallel chains of design reasoning throughout the process – chains that can be performed, observed and augmented by human designers and autonomous software agents, that is, programs capable of making design decisions on their own.

“I see AI as having the potential to not only automate or ‘compress’ existing design workflows that are currently conceived by human architects in a top-down manner, but also, even more radically, to un-automate or ‘expand’ existing design thinking habits that are yet to be conceived, using AI in a new bottom-up manner,” says Prof Koh.

He worked with London-based design practice Zaha Hadid Architects from 2011 to 2014 before joining SUTD in 2019 to teach what he says is the world’s first dedicated “AI x Architecture” module.

On Design Brain, Prof Koh says: “The importance of this research lies in gaining a deeper mutual qualitative and quantitative understanding of the evolving gap between the intelligent architectural capabilities of AI and those of human architects. At this point in time, it functions as a litmus test to see if an AI model can acquire the design skillsets of a human architect.”

Analysing the ear canal through light imaging and machine learning

Associate professor Stylianos Dritsas worked with a team from Changi General Hospital to improve how hearing aids are made. ST PHOTO: CHONG JUN LIANG

According to the World Health Organisation (WHO), customised hearing aids can help cut the rate of cognitive decline by nearly half in seniors at a higher risk of dementia. WHO says that nearly 2.5 billion people globally may have some degree of hearing loss by 2050, and more than 700 million will require hearing rehabilitation.

To create a more accurate mould of the outer ear for hearing aids, ear specialists in Singapore led by Dr Kenneth Chua, formerly of Changi General Hospital (CGH), collaborated with SUTD associate professor Stylianos Dritsas from 2020 to 2023.

Prof Dritsas teaches design computation and robotics, combining design, geometry, computing and manufacturing methods. He worked with the CGH team to improve the way hearing aids are made, using machine learning – a field of study in AI centred on algorithms that learn from data and can then carry out tasks without relying on explicit instructions.

Fitting hearing aids typically involves injecting silicone into patients’ ears. This produces ear canal moulds, which are then scanned using 3D imaging to create digital ear canal impressions. But before the digital impressions can be used by doctors, they require a lot of manual 3D editing.
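The article does not publish the project’s code, but the shift it describes – from explicit geometric rules to a model trained on labelled examples – can be sketched in a few lines of Python. In this hypothetical illustration using scikit-learn, simple geometric features measured from each scanned impression are used to predict whether it will need manual editing; the features, values and labels are invented for the example and are not SUTD’s actual pipeline.

# Hypothetical illustration of learning from examples rather than explicit rules.
# Features, values and labels are invented; this is not SUTD's actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row holds simple geometric features measured from one scanned impression,
# e.g. canal length (mm), mean diameter (mm), surface roughness, count of mesh holes.
X = np.array([
    [24.1, 7.2, 0.11, 0],
    [19.8, 6.5, 0.34, 3],
    [22.7, 7.0, 0.15, 1],
    [18.2, 5.9, 0.41, 5],
    # ... in practice, many more impressions labelled by audiologists
])
y = np.array([0, 1, 0, 1])  # 0 = usable as scanned, 1 = needs manual 3D editing

# The model infers the experts' judgement from the labelled data,
# instead of a programmer writing the rules by hand.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

new_scan = np.array([[21.5, 6.8, 0.29, 2]])  # features of a freshly scanned impression
print(model.predict(new_scan))  # a result of 1 would flag the scan for manual clean-up

The same learned sense of what a “typical” impression looks like is what lets such a model flag samples that sit far off the general trend – the behaviour Prof Dritsas describes below.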
Prof Dritsas and his team at SUTD helped Dr Chua – now a principal audiologist at Mount Elizabeth Medical Centre – develop computational methods to pre-process ear canal impressions. The project ended earlier in 2025, but there are plans for an extension to conduct more studies.

The goal is to develop automated tools that help design better hearing aids. This will improve the manufacturing and fitting of the aids, and help standardise ear canal data for research purposes.

Prof Dritsas points out that in traditional computer programming, tasks that require visual intelligence and qualitative assessment are very difficult to express as simple rules fed into a computer. With AI, however, it is possible to provide enough examples for a program to pick up the trends and patterns, enabling it to perform these tasks without being explicitly guided by a human.

“The key finding of the ear canal project is that, unlike classical geometric analyses which require ‘clean’ or comprehensive data, the machine learning models are able to work with ‘noisy’ (incomplete) data from the real world,” says Prof Dritsas, 48, a founding member of SUTD. He has taught at renowned architecture schools such as the Architectural Association School of Architecture in London, and at Harvard University in Massachusetts, in the United States.

“During the process, something unexpected happened,” he recalls. “The machine learning model scrutinised the team’s main printouts of the outer ear shapes thoroughly and was able to spot errors. This occurred because some of our data samples were off the general trend.

“So, it provided highly intelligent guesses to improve the end-product. This is how AI radically improved the end-product by identifying ear canal characteristics.”

Prof Dritsas says the machine learning model is a pioneering technology. One of its “superpowers” is the ability to make hyper-intelligent assessments for a final read-out that is more than 90 per cent accurate.

“The model learnt something objective by combining subjective opinions of experts and collective knowledge from the team,” he adds. “This is something very powerful about machine learning that is not yet well understood, and which has enormous potential.”

Breaking down complex concepts to share knowledge about coastal protection

Professor Eva Castro with an AI-generated model of a prototypical project for coastal inhabitation in Vietnam. ST PHOTO: CHONG JUN LIANG

Protecting coastlines from rising sea levels due to climate change is a top priority, says SUTD professor of practice Eva Castro from the Architecture and Sustainable Design pillar.

As director of the Centre for Climate Adaptation (CCA), housed within the Upper Changi campus, she uses AI to generate visualisations of projected scenarios far faster than was possible in previous years, when the technology was less advanced.

Instead of requiring designers to manually select chart types and visual encodings, AI-powered tools can automatically suggest the most effective visualisations to represent data and highlight key insights.
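The article does not name the visualisation tools the centre uses. As a rough sketch of how such an AI-assisted suggestion step can work, the hypothetical Python example below asks a large language model to recommend a chart type and visual encodings for a coastal dataset; the column names, prompt and model name are assumptions, and any chat-completion API could stand in for the OpenAI client shown.

# Hypothetical sketch of AI-assisted visualisation suggestion.
# Dataset columns, prompt and model name are assumptions, not the CCA's actual tooling.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment

columns = ["year", "projected_sea_level_rise_m", "coastal_segment", "population_exposed"]
prompt = (
    "Given a table with columns "
    f"{columns}, suggest the single most effective chart type and visual encodings "
    "(x-axis, y-axis, colour) to show how coastal exposure changes over time, "
    "and justify the choice in one sentence."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # e.g. a suggested line chart and its encodings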
Such tools allow Prof Castro to ideate and iterate on large-scale infrastructure proposals for coastal resilience that can be shared across borders to help stakeholders such as overseas researchers, scientists and government agencies.

AI has also helped to break down complex design concepts, giving stakeholders access to information that would otherwise be understood only by specialists or experts.

“The CCA agenda is centred on archipelagos, so we have been working with coastal areas, developing a new understanding of the meaning of coastal resilience and the role the ocean plays within it, designing hybrid as well as nature-based solutions to inhabit possible futures,” says Prof Castro, 53.

A presentation on coastlines by the Centre for Climate Adaptation. The centre is a collaboration between SUTD and US schools Pratt Institute and Pace University. PHOTO: CENTRE FOR CLIMATE ADAPTATION

She is also the co-founder of research laboratory formAxioms @ SUTD, which looks at future scenarios through virtual reality, augmented reality and extended reality projects.

One of CCA’s ongoing research programmes is the Long Island project, which involves reclaiming land at higher levels off East Coast and shaping it into “islands” located a distance away from the existing coastline.

The programme, aimed at supporting Singapore’s coastal protection efforts, is expected to reclaim about 800ha of land off East Coast. It will be progressively implemented from 2030 and will stretch over a few decades, according to national water agency PUB.

Prof Castro says the aim of her work is to produce more inclusive design processes that help share knowledge and engage society at large.

“As designers, we want to promote the importance of our role within the infrastructural projects that define the cities we inhabit,” she says. “In doing so, we believe we are strategists who know how to operate across diverse scales and typologies.”

Collaborative intelligence, instead of artificial intelligence

PhD candidate Daryl Ho is exploring how extended reality, or XR, can augment the ways humans and AI interact. ST PHOTO: CHONG JUN LIANG

Architecture graduate and PhD candidate Daryl Ho believes AI will not take over the jobs of humans, but will instead be a reliable assistant to designers. Think collaborative intelligence rather than artificial intelligence, he says.

The 30-year-old is working on a few “wonder”, or speculative, projects, which look into how the medium of extended reality (XR) can augment the way humans interact with AI beyond the standard user interface. The way humans engage with AI should not be limited to buttons and text prompts, he says. Through XR, users can make use of space itself for new forms of interaction and collaboration with AI.

“One of my projects looks into how AI can be a creative partner that helps users design their own virtual environments,” says Mr Ho, whose research approaches XR from the perspective of an architect designing spaces for people to interact in.

His goal is to find ways for people to curate their own spaces within the metaverse, or virtual world, not only through appearance, but also through how the environment behaves and responds. In this project, AI collaborates with users, responding to their actions through sound and colour, and offering design suggestions of its own.
This allows designers within a virtual environment to create together with AI in a social, embodied manner.

Another project looks at how AI can redefine how people interact with information – not as static elements taken from a webpage, but as dynamic agents that respond to the user and to one another.

“Much of the information we find online is presented to us through search engines, websites and online repositories that for the large part remain static 2D interfaces,” he says. “Through XR and AI, we can change the way a person receives and makes sense of information.”

Imagine information as little AI helpers living inside a virtual world. Instead of just looking at words and pictures on a screen, one can interact with these helpers directly, exploring information in a dynamic 3D way, beyond the flat screen of the computer.

“The metaverse is still a nascent force,” adds Mr Ho, noting that its implications make it an important future for designers to speculate about. If XR and AI become ubiquitous at a scale akin to that of the internet now, how does one engage with such spaces?

“My research looks into tapping the principles of the metaverse to find novel ways in which we might be able to work, create and play with an ever-evolving digital landscape.”

Designer and lifestyle journalist Chantal Sajan writes on design and architecture.