Elevate Efficiency and Innovation with AI Tools
AI is rapidly evolving, and new tools and techniques are constantly being developed. In 2017 I wrote an article on some of the ways I thought artificial intelligence would impact our industry and the broader property industry, and it’s been incredible to witness what has happened in the six to nine months since ChatGPT was released to the public in late November 2022.
My prediction that it would take 10-15 years for machine learning to dramatically impact architecture and planning applications has been shortened by at least a decade: only six years after the original article was published, we are already seeing widespread use of AI tools within the property development space.
How AI can be a powerful tool in 3D Visualisation
AI can be a potent tool for 3D visualisation for reasons such as the following:
- Procedural Generation: AI techniques can be used to generate 3D content procedurally. This means that instead of manually creating every element in a 3D scene, AI algorithms can automatically generate objects, textures, landscapes, and even entire environments. This dramatically speeds up the creation process and enables the generation of vast and diverse visual content.
- Automated Animation: AI can automate the animation of 3D objects and characters. By leveraging machine learning and motion capture techniques, AI algorithms can generate natural movements, simulate physics, and create realistic animations. This saves time and effort in manually animating complex scenes.
- Data Analysis and Insights: AI can analyse large datasets of 3D visualisations to extract patterns, trends, and insights. This can be particularly useful in fields like architecture, urban planning, and industrial design, where AI can analyse data to optimise designs, identify potential issues, or simulate scenarios before physically building them.
- Interactive and Adaptive Environments: AI allows for creating interactive and adaptive 3D environments. By integrating AI algorithms, 3D visualisations can respond to user inputs, adapt to changing conditions, or simulate dynamic behaviours. This enables immersive and engaging experiences in virtual reality, interactive masterplans for large-scale residential subdivisions or simulation training applications.
- Optimisation and Efficiency: AI can optimise various aspects of the 3D visualisation pipeline, such as rendering, scene composition, or texture mapping. By employing machine learning algorithms, AI can reduce rendering times, optimise resource usage, and improve overall efficiency in generating and displaying 3D content.
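To make the procedural-generation point concrete, here is a minimal Python sketch (the language and approach are our own choice for illustration): it builds a terrain heightmap by interpolating a coarse grid of random control heights, a simplified stand-in for the noise functions and learned generative models that production tools actually use.

```python
import random

def generate_heightmap(size, coarse=4, seed=42):
    """Procedurally generate a square terrain heightmap by bilinearly
    interpolating a coarse lattice of random control heights."""
    rng = random.Random(seed)
    # Coarse lattice of random heights in [0, 1]
    grid = [[rng.random() for _ in range(coarse + 1)] for _ in range(coarse + 1)]
    heightmap = []
    for y in range(size):
        row = []
        for x in range(size):
            # Position of this pixel within the coarse lattice
            gx = x * coarse / (size - 1)
            gy = y * coarse / (size - 1)
            x0, y0 = int(gx), int(gy)
            x1, y1 = min(x0 + 1, coarse), min(y0 + 1, coarse)
            tx, ty = gx - x0, gy - y0
            # Bilinear interpolation between the four surrounding control points
            top = grid[y0][x0] * (1 - tx) + grid[y0][x1] * tx
            bottom = grid[y1][x0] * (1 - tx) + grid[y1][x1] * tx
            row.append(top * (1 - ty) + bottom * ty)
        heightmap.append(row)
    return heightmap

terrain = generate_heightmap(16)
```

Because the generator is seeded, the same terrain can be reproduced on demand, while changing the seed yields endless variations — the essence of procedural content.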
As these tools are optimised and evolve, we will see more and more AI-generated 3D visual content alongside work produced manually in our current lineup of 3D tools, such as 3D Studio Max.
Current AI Tools for 3D Visualisation
Adobe Photoshop Generative Fill
Primarily aimed at graphic designers, the new Generative Fill feature in Photoshop allows the user to change a region or an entire image. By selecting specific areas, the generative model can embed new objects with appropriate lighting and shadows, or completely change parts of the image. Because the changes are non-destructive and each iteration adds a new layer, they’re simple to modify, but they take a lot of work to get right and mostly end up with mixed results.
Here are some before and after images of its use.
Midjourney
The free beta version of this community-driven generative platform has changed to a paid model as the technology has improved. In earlier versions of the image generator, we noticed that human hands and other small details didn’t look right, but those errors have been corrected, and the output is fantastic for concept generation in extremely rapid succession.
It takes a fair bit of work to figure out how to generate the images – but we’ve had some success producing images for content marketing.
The greatest challenge with Midjourney, as with other AI tools, is to control the engine to produce precisely what is required. This is where the term “prompt engineering” comes into play, as the user needs to experiment and prompt the tool in various ways to obtain the desired result.
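One way to make prompt engineering less ad hoc is to build prompts from structured parts rather than free-typing them each time. The small Python helper below is entirely hypothetical (it is not part of any Midjourney API); only the trailing --ar and --v flags follow Midjourney’s documented parameter style.

```python
def build_prompt(subject, style=None, lighting=None, aspect_ratio=None, version=None):
    """Compose a Midjourney-style text prompt from structured parts,
    so variations can be generated and compared systematically."""
    parts = [subject]
    if style:
        parts.append(f"in the style of {style}")
    if lighting:
        parts.append(f"{lighting} lighting")
    prompt = ", ".join(parts)
    # Midjourney-style flags such as --ar (aspect ratio) go at the end
    if aspect_ratio:
        prompt += f" --ar {aspect_ratio}"
    if version:
        prompt += f" --v {version}"
    return prompt

print(build_prompt("aerial view of a coastal residential masterplan",
                   style="photorealistic architectural render",
                   lighting="golden hour", aspect_ratio="16:9", version="5"))
```

Varying one parameter at a time — style, lighting, aspect ratio — makes it much easier to learn which part of a prompt is driving the result.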
DALL-E and DALL-E2
DALL·E is an AI model that generates images from textual descriptions. It combines natural language processing and computer vision techniques to understand and generate coherent, detailed images based on textual prompts. It is based on a variation of the GPT (Generative Pre-trained Transformer) architecture, a bit like ChatGPT.
DALL·E can create novel and imaginative images, often incorporating objects and concepts not commonly found together. For example, given a textual prompt like “an armchair shaped like an avocado,” DALL·E can generate a realistic image of such an object.
Images produced by DALL-E look less realistic than those from Midjourney, but the tool similarly allows for rapid concept generation. Unlike Midjourney, it doesn’t offer a free trial for generating images.
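For those who want to script DALL-E rather than use the web interface, OpenAI exposes an image-generation endpoint. The sketch below assumes the official `openai` Python package and an API key in the environment; exact model names and parameters may change as the API evolves.

```python
def image_request(prompt, size="1024x1024", n=1):
    """Assemble parameters for an image-generation request.
    DALL-E sizes include "256x256", "512x512" and "1024x1024"."""
    return {"prompt": prompt, "size": size, "n": n}

if __name__ == "__main__":
    # Requires the `openai` package and an OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()
    response = client.images.generate(
        **image_request("an armchair shaped like an avocado"))
    print(response.data[0].url)  # link to the generated image
```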
Stable Diffusion
Unlike Midjourney and DALL-E, Stable Diffusion is an open-source AI platform. The system is currently in beta and is even more challenging to control than the others. However, being free, it’s attracting a larger audience looking to experiment with generative machine learning systems.
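Because Stable Diffusion is open source, it can also be run locally. The sketch below assumes Hugging Face’s `diffusers` library, the `runwayml/stable-diffusion-v1-5` checkpoint and a CUDA GPU, and highlights the two settings that most affect output: inference steps and guidance scale.

```python
def generation_settings(prompt, steps=30, guidance_scale=7.5):
    """Collect the settings that most affect output quality: more steps
    means slower but cleaner images; higher guidance sticks closer to
    the prompt at the cost of variety."""
    return {"prompt": prompt,
            "num_inference_steps": steps,
            "guidance_scale": guidance_scale}

if __name__ == "__main__":
    # Requires `diffusers` and `torch`; the checkpoint is downloaded
    # from the Hugging Face Hub on first run.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
    pipe = pipe.to("cuda")  # a GPU is strongly recommended
    image = pipe(**generation_settings(
        "concept sketch of a timber-clad townhouse, dusk lighting")).images[0]
    image.save("concept.png")
```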
LookX
This brand-new tool, purpose-built for architects and designers, is one we’ve just started exploring. By uploading an image to train the generative model, further prompts can be added to create elevations, floorplans, perspective drawings, aerial views and other graphic design elements to assist with the design process.
It has many options and reference images that can be used to style future generated alternatives. We’ve been able to load clay or white-card images into the engine and create coloured-up versions in the style of Zaha Hadid or Bjarke Ingels, as per the example below.
Whilst still in development, this tool promises to be extremely useful for architects. It is a plugin for SketchUp®, Revit® and Rhinoceros® that assists with concept generation inside those programs. Having only briefly experimented with it, the jury is still out on how much use it will get once it’s out of beta, but it’s something else to keep an eye on.
Where to from here
Predicting the exact advancements of AI in 3D visualisation over the next 10 years is challenging, as the field of machine learning is rapidly evolving. We’d be grasping at straws trying to guess where it’s heading, but below are some areas where we could see a lot of change and impact.
- Real-Time Ray Tracing: AI algorithms may continue to enhance real-time rendering techniques such as ray tracing, enabling more realistic and immersive visualisations even in interactive applications like virtual reality and 3D content.
- Generative Models for Content Creation: AI systems like LookX and DALL·E have already shown promise in generating 2D images from textual prompts. In the next decade, similar models might evolve to generate 3D content, allowing for the automated and novel creation of 3D assets, environments, and even animations.
- Speech to 3D Model Creation: As AI input systems progress, we may be able to replace typed prompts with spoken instructions. This may be achieved in the near term, well short of ten years away.
- AI-Driven Physics Simulation: Although most work in 3D visualisation is static, AI techniques can be further integrated with physics simulations, allowing for responsive and realistic behaviour of objects and materials within 3D scenes. This could improve simulations of cloth, fluids, destruction, and other complex physics-based interactions.
- Enhanced Object Recognition and Tracking: AI algorithms can continue to advance in recognising and tracking objects in 3D space, enabling more sophisticated augmented and mixed reality experiences and seamless integration of virtual elements into the real world.
- Intelligent Scene Composition: AI can assist in automating the process of scene composition, helping designers and artists to quickly assemble and arrange 3D assets and environments with intelligent suggestions based on aesthetics, composition rules, and user preferences.
- Natural Language Interfaces: AI could enable more intuitive and natural language interfaces for 3D visualisation tools, allowing users to communicate their intentions or commands using spoken language, making the creation process more accessible and efficient.
- Deep Learning-based Animation: AI techniques like reinforcement learning and motion capture can advance animation capabilities, allowing for more realistic and lifelike movements of characters and objects. This could include improved facial animation, physical simulations, and natural, human-like behaviours.
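As a toy illustration of the natural-language-interface idea above, the Python sketch below maps simple typed (or speech-transcribed) commands to structured scene operations. The command grammar and scene representation are invented for illustration; a real system would use a language model rather than keyword matching.

```python
import re

# Invented vocabulary for the toy command grammar
COLOURS = {"red", "green", "blue", "white", "grey"}
SHAPES = {"cube", "sphere", "cylinder", "plane"}

def parse_command(text):
    """Parse commands like 'add a red cube at 2, 0, 1' into a scene operation."""
    words = text.lower().split()
    action = words[0] if words else None
    colour = next((w for w in words if w in COLOURS), None)
    shape = next((w for w in words if w in SHAPES), None)
    # Any numbers in the command are treated as x, y, z coordinates
    coords = [float(n) for n in re.findall(r"-?\d+(?:\.\d+)?", text)]
    return {"action": action, "colour": colour, "shape": shape,
            "position": tuple(coords) if coords else None}

cmd = parse_command("add a red cube at 2, 0, 1")
```

The same structured output could just as easily be produced from a speech-to-text transcript, which is what would link this idea to the speech-driven modelling point above.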
It’s important to note that these are speculative possibilities based on current trends, and the actual advancements in AI for 3D visualisation over the next 10 years may be different or even more groundbreaking. This technology typically accelerates at the pace of hardware progression, so, as Moore’s law suggests, as our computing power increases, so will the ability of AI systems to process greater amounts of data and deliver greater levels of realism.
To your development success,