4 key trends shaping how SMEs interact with generative AI 

One year on from the public launch of ChatGPT, nearly one in two Australian technology executives expect to increase their AI investments by more than 25% over the next year.

According to a recent joint report by MIT and Databricks, AI investment has stormed to the top of the priorities list – despite cost-cutting measures across many businesses – due to its potential to increase efficiency and generate new revenue streams. 

Where year one was largely about exploring and experimenting with gen AI, year two is set to bring more use cases into production across departments, more sophisticated AI implementation strategies, and further innovation in the AI field itself.

Here are four key gen AI trends for businesses to watch out for in 2024:

Use case-driven implementation

In 2023, an astounding 98% of Australian enterprises were using gen AI in some way: 30% had adopted it and another 68% were experimenting with it, according to the MIT report. There’s no denying that organisations across the board have felt a sense of urgency to bring the technology into their businesses, regardless of the use case. In 2024, however, there will be a greater focus on the practical use of AI, with an emphasis on the specific business problems gen AI can solve.

Naturally, organisations that had a head start experimenting with gen AI in 2023 will be in a stronger position to determine which areas of the business stand to benefit most from deployment. For instance, banks such as NAB have applied AI to boost their bankers’ productivity so they can spend more time building and deepening customer relationships: time-consuming work such as writing memorandums on companies can now be done with gen AI, drastically speeding up the loan approval process. In areas such as health and government, on the other hand, which also grapple with strict regulatory pressures, we may see more novel use cases emerge.

AI model differentiation

Moving into 2024 and beyond, companies will not only work to decipher how they will use large language models (LLMs) and gen AI, but also determine what kind of model they need and how they roll it out.

Key considerations will include whether to choose a large or a small model, and whether it is better to build internally or buy a pre-existing model that can be fine-tuned. Some companies will stick with large models, while others may find that investing in smaller models delivers a better return. Some will find they do need LLMs with billions of parameters, yet will still fine-tune those models internally to ensure they are getting the most out of that significant investment.

These decisions will all depend on the specific use cases an organisation has in mind. For example, an online retailer that wants to build a chatbot for its e-commerce site might choose a small model and then train it on large amounts of its own product data, ensuring high performance and the ability to answer questions with relevant product information. In this instance, a smaller model would be more effective in terms of both cost and results than a large model trained on general public data.
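
To make this concrete, below is a minimal, hypothetical sketch of fine-tuning a small open model on product Q&A text using the Hugging Face datasets and transformers libraries. The model name (distilgpt2), the toy examples and the hyperparameters are illustrative assumptions rather than recommendations; in practice the training set would be the retailer’s full catalogue and support history.

    from datasets import Dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    # A compact model stands in for whichever small LLM the retailer chooses.
    model_name = "distilgpt2"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # Toy product Q&A pairs; in practice this would be the retailer's own data.
    examples = [
        {"text": "Q: Is the trail runner waterproof? A: Yes, it has a sealed membrane."},
        {"text": "Q: What sizes does the winter parka come in? A: Sizes XS to XXL."},
    ]
    dataset = Dataset.from_list(examples).map(
        lambda row: tokenizer(row["text"], truncation=True, max_length=128),
        remove_columns=["text"],
    )

    # Standard causal language-model fine-tuning loop.
    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="product-qa-model",
                               per_device_train_batch_size=2,
                               num_train_epochs=1),
        train_dataset=dataset,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()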

Regardless of which models organisations choose, agility should be a key consideration. Given how fast the space is evolving, the overall architecture should allow businesses to experiment quickly and swap out specific models easily if need be.

No-code/low-code uptake acceleration

Gen AI presents an opportunity to democratise access to AI and enable all teams, technical or not, to reap the benefits of data-driven insights within their organisations. Gen AI-enabled tools will help the low-code and no-code movement, which aims to reduce or eliminate the need for traditionally hand-written code, live up to its full potential, and will increasingly remove barriers to data access.

One important trend we will witness is the ability of employees to interact with these AI tools in natural language. Simply put, users will be able to type their desired outcome in plain English, eliminating the need for knowledge of languages such as SQL or Python and radically democratising access to data and AI.
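
As a rough sketch of that “plain English in, query out” pattern, the example below uses the OpenAI Python client as one possible provider; the model name, the table schema and the prompt wording are assumptions made for illustration rather than a prescribed setup.

    from openai import OpenAI

    client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

    # A hypothetical sales table the business user wants to query in plain English.
    schema = "orders(order_id, customer_id, order_date, total_amount)"
    question = "What was our total revenue last month?"

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Translate the user's question into a single SQL query "
                        f"for this table: {schema}. Return only the SQL."},
            {"role": "user", "content": question},
        ],
    )

    # The user never writes SQL themselves; the model drafts it for them.
    print(response.choices[0].message.content)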

A rising focus on AI regulation and governance standards

As AI technologies evolve and become more ubiquitous, national and international regulators will take a more stringent approach to assessing how the technology is used and its societal impact. We already saw this play out in the second half of 2023, with developments such as the European Union moving towards an AI regulatory regime. In 2024, we can expect regulatory debates to intensify and more guardrails to be put in place.

For business decision-makers, this will mean greater governance pressures to consider when deploying AI. And while governance standards will continue to evolve, effective AI governance can broadly be marked by accountability, standardisation, adherence to regulations, quality assurance, and transparency. These elements are crucial for optimising the value of AI in a secure and ethical manner, while also mitigating potential regulatory, legal, and reputational challenges. 

To ensure that AI’s potential is maximised while deployment risks are mitigated, businesses must streamline their data and AI governance. By relying on a unified data platform, an organisation can avoid the complexity and risk that come with working across disparate platforms. For example, data intelligence platforms built on top of the lakehouse architecture, which combines data warehouses and data lakes to simplify data storage and usage, allow for enhanced governance and privacy: they can automatically detect, classify, and prevent the misuse of data while simplifying management through natural language processing.
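
As a deliberately simplified illustration of what “automatically detecting and classifying” sensitive data can mean in practice (not how any particular platform implements it), the sketch below scans a sample of each column’s values for common personally identifiable patterns and tags the columns it flags.

    import re

    # Simple regex heuristics for two common kinds of personal data.
    # Production classifiers are far richer; this is only an illustration.
    PII_PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    }

    def classify_columns(sample_rows: dict) -> dict:
        """Return the PII tags detected in a sample of each column's values."""
        tags = {}
        for column, values in sample_rows.items():
            hits = [name for name, pattern in PII_PATTERNS.items()
                    if any(pattern.search(str(v)) for v in values)]
            if hits:
                tags[column] = hits
        return tags

    sample = {
        "customer_email": ["jane@example.com", "sam@example.org"],
        "contact_number": ["+61 400 000 000"],
        "order_total": ["129.95", "84.50"],
    }
    print(classify_columns(sample))
    # {'customer_email': ['email'], 'contact_number': ['phone']}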

These trends emphasise a growing focus on solving specific business problems, leveraging early experimentation advantages, and democratising AI processes. In essence, they serve as critical compass points, directing businesses towards a future where innovation, efficiency, and accessibility define the strategic adoption of AI technologies.

Keep up to date with our stories on LinkedIn, Twitter, Facebook and Instagram.

Adam Beavis

Adam Beavis is the Vice President and Country Manager for ANZ at Databricks.
