Future of Software Technology Singapore 2026: AI Scale Will Be Won in the Foundations, Not the Demos

AI Landscape Powered by FOST

I attended Future of Software Technology Singapore 2026 today, part of the wider FOS Federation, which brought together API Days, Green IO, AI Collective Day, GraphQL Day, and the OpenAPI Initiative under one roof. That structure mattered. This was not a narrow conference about one tool, one model, or one vendor narrative. It was designed as a cross-disciplinary environment where APIs, AI, governance, architecture, sustainability, and engineering practice could be considered together. That felt right for where the market is now. We are no longer in the stage where AI can be treated as a standalone innovation stream. It is now colliding with the realities of enterprise systems, software delivery, security, and operating model design.

I attended four sessions in particular: the opening remarks by Mehdi Medjaoui and Jon Scheele, Aki Ranin’s session on connecting AI agents to business, Manjunath Bhat’s keynote on the future of software engineering, and Ram Tallavajhala’s session on why API governance determines agentic success. Looked at together, these sessions formed a coherent storyline. The market is moving quickly toward agentic systems, but the real differentiator will not be who can launch the fastest pilot. It will be who can build the most trustworthy, governable, and economically viable foundation for those systems to operate at scale.

1) Mehdi Medjaoui and Jon Scheele, the real theme of the opening remarks was not just AI, it was convergence

Mehdi Medjaoui’s opening remarks did more than welcome people to the event. They framed the conference as a federation of related disciplines, with API Days sitting alongside Green IO, AI Collective Day, GraphQL Day, and the OpenAPI Initiative. That setup sent an important signal. The future of software is no longer a series of separate conversations. APIs, AI enablement, governance, software architecture, and ecosystem collaboration are increasingly part of the same discussion. Even the event design, one ticket across all co-located conferences, was meant to encourage cross-pollination rather than silos.

What also stood out was the emphasis on community, practical exchange, and real-world feedback. This was not positioned as a hype showcase. The organisers explicitly highlighted networking, industry case studies, and practitioner exchange across sectors, including financial services, government, telecoms, and healthcare. In a market flooded with AI announcements, that matters. It reflects a healthier instinct: less fascination with abstract capabilities, and greater interest in how things are actually implemented in organisations with legacy infrastructure, governance obligations, and operational complexity.

There was also a subtle but important point in the way the API theme was framed. Jon Scheele noted that APIs have always been part of the fabric of software and technology, but are now even more central because organisations increasingly need an API layer to connect skills and systems for agents. That may be one of the clearest summaries of where we are. For years, APIs were seen as plumbing. In the agentic era, they become a strategic control surface. They are no longer just integration assets. They are part of how intelligence gets grounded, secured, orchestrated, and governed inside the enterprise.



2) Aki Ranin, connecting AI agents to business starts with taxonomy, architecture, and economics

Aki Ranin’s session was one of the most useful because it brought structure to a space that is currently overloaded with loose terminology. Rather than treating “agents” as a single category, he broke them into three: prompt agents, workflow agents, and unbounded agents, each step moving toward broader autonomy. That distinction is not just academic. It has immediate implications for enterprise adoption. Prompt agents are still heavily human-directed. Workflow agents operate with bounded autonomy and are much more relevant for real business deployment. Unbounded agents may be more ambitious, but they also increase risk, complexity, and the need for stronger controls.

That matters because many enterprise conversations on agentic AI are still too vague. Teams talk about “using agents” as if the category itself is the strategy. It is not. Aki’s taxonomy is a useful reminder that the first strategic question is not whether you have an agent strategy, but what kind of agency you are actually prepared to operationalise, govern, and pay for.

His second strong point was architectural. He emphasised separating chat interfaces, agent layers, and infrastructure, while also expanding the ways agents can access enterprise information, whether through APIs, retrieval-augmented generation, or direct interaction with data environments such as warehouse agents. This is where the conversation becomes more serious. The future enterprise AI stack is not just a model plus a user interface. It is an orchestration problem. It requires decisions about where reasoning sits, how context is injected, which tools are callable, and what boundaries govern those calls.
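To make that separation concrete, here is a minimal Python sketch of the layering as I understood it: a chat interface that only collects input and displays output, an agent layer that decides what to call, and an infrastructure layer that owns and gates the tools. The names and structure here (Tool, ToolRegistry, agent_answer) are my own illustration, not Aki's code or any particular framework's API.

```python
# A minimal sketch of the layering described in the session: chat interface,
# agent layer, and infrastructure (tools) kept separate. All names are
# hypothetical illustrations, not a specific framework's API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str
    fn: Callable[[str], str]
    allowed: bool = True  # governance boundary: can the agent call this?

class ToolRegistry:
    """Infrastructure layer: the only place tools are registered and gated."""
    def __init__(self) -> None:
        self._tools: dict[str, Tool] = {}

    def register(self, tool: Tool) -> None:
        self._tools[tool.name] = tool

    def call(self, name: str, arg: str) -> str:
        tool = self._tools.get(name)
        if tool is None or not tool.allowed:
            raise PermissionError(f"tool '{name}' is not callable")
        return tool.fn(arg)

def agent_answer(question: str, registry: ToolRegistry) -> str:
    """Agent layer: decides which tool to use. A real system would let a
    model plan this step; here it is hardcoded to keep the sketch runnable."""
    context = registry.call("search_docs", question)  # context injection
    return f"Answer grounded in retrieved context: {context}"

if __name__ == "__main__":
    registry = ToolRegistry()
    registry.register(Tool("search_docs", "retrieval over enterprise documents",
                           lambda q: f"stub passage relevant to '{q}'"))
    # Chat interface layer: collects input, displays output, nothing more.
    print(agent_answer("What is our refund policy?", registry))
```

The useful property of this boundary is that the agent can only reach enterprise systems through the registry, which is exactly where access and governance decisions belong.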

Aki also brought the economic reality back into view. One of the most useful parts of the session was the warning against large, monolithic agents that are expensive, difficult to optimise, and hard to govern. He pointed to the value of specialised subagents, dynamic context selection, and stronger evaluation discipline, including prompt optimisation and task-level measurement. This is exactly the kind of practical framing enterprises need right now. Too much of the market still treats bigger, broader, more autonomous systems as inherently more advanced. In practice, modularity, narrower task design, and tighter evaluation may be the smarter path for most organisations.
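As a rough illustration of the modular alternative, here is a sketch in which a cheap router dispatches each task to a narrow subagent instead of one monolithic agent. The subagent names and the deliberately naive routing rule are invented for this example:

```python
# Specialised subagents with dynamic selection: each subagent carries only
# the context and skills its task needs. The routing rule is a deliberately
# naive stand-in for a model- or classifier-driven router.
from typing import Callable

def billing_agent(task: str) -> str:
    return f"[billing subagent] handled: {task}"

def support_agent(task: str) -> str:
    return f"[support subagent] handled: {task}"

SUBAGENTS: dict[str, Callable[[str], str]] = {
    "billing": billing_agent,
    "support": support_agent,
}

def route(task: str) -> str:
    # Dynamic context selection: pick the narrowest agent that fits the task.
    key = "billing" if "invoice" in task.lower() else "support"
    return SUBAGENTS[key](task)

print(route("Customer asks why their invoice doubled this month"))
print(route("Customer cannot log in to the portal"))
```

Narrow scope is also what makes task-level measurement practical: each subagent can be evaluated against its own small task set rather than against everything at once.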

His discussion of sovereign AI also deserves more attention than it typically gets. The important question was simple: who controls your agents, and can they be turned off externally? That takes the discussion out of the comfort zone of product demos and places it in the real terrain of resilience, geopolitical exposure, supply chain dependency, and strategic control. For organisations in APAC, especially, that is not a theoretical issue. It is becoming part of the long-term architecture conversation.

My biggest takeaway from Aki’s session was this: connecting AI agents to business is not mainly a model problem. It is a design problem, an architecture problem, an evaluation problem, and increasingly a sovereignty problem. If those layers are weak, the “agent” is just a fragile abstraction sitting atop enterprise complexity.




3) Manjunath Bhat, the future of software engineering is not less human, it is differently human

Manjunath Bhat’s keynote was probably the session that pushed the conversation furthest beyond the usual automation narrative. He began with a deceptively simple question: what would you build if you could build anything? Behind that question was a much bigger shift. If the cost and speed barriers to software creation continue to fall, then the scarcity moves. The real constraint becomes not coding capacity alone, but judgement, imagination, prioritisation, and the quality of decisions about what should be built in the first place.

This is where his argument became especially sharp. He pushed back on the idea that AI means fewer developers. His view was almost the opposite. Faster development creates more competitive parity, which in turn increases the demand for differentiated software. That means more need for people who can direct, shape, and evolve what gets built. In his framing, software engineering does not disappear, but it does get redefined. The value of the human shifts from execution alone toward executive function, creativity, direction, and decision-making. That is one of the more important reframings I heard today.

His concept of “tiny teams” also stood out. Rather than large human teams, he described a future in which small teams, sometimes as lean as a product owner, a designer, and an engineer, are supported by AI agents. In that world, the platform team becomes even more important because autonomy without governance leads to chaos. That point connects strongly with what we are seeing across industries now. AI may enable smaller operating units, but it simultaneously increases the need for shared standards, architecture, oversight, and internal platforms that provide consistent leverage.

Another strong idea was the shift from “definition of done” to “definition of good enough” in agentic systems. That is more than a catchy line. It recognises that once software includes probabilistic components, feedback loops, evaluation, and observability become part of the product discipline. Agentic systems are not simply coded and completed. They are monitored, tuned, evaluated, and continuously improved. That is why his emphasis on an Agent Development Lifecycle, ADLC, felt important. It gives organisational form to a future in which building agents requires a more explicit operating discipline, not less.
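As a thought experiment, “definition of good enough” can literally be encoded as an evaluation threshold rather than a completion checkbox. The grader, tasks, and threshold below are all illustrative assumptions, not anything presented in the keynote:

```python
# "Good enough" as a shipping gate: an agentic feature ships when its eval
# score clears a threshold, and keeps being measured after it ships.
def grade(expected: str, actual: str) -> float:
    """Toy task-level grader: 1.0 on exact match, else 0.0."""
    return 1.0 if expected.strip().lower() == actual.strip().lower() else 0.0

def good_enough(results: list[tuple[str, str]], threshold: float = 0.9) -> bool:
    score = sum(grade(e, a) for e, a in results) / len(results)
    print(f"eval score: {score:.2f} (threshold {threshold})")
    return score >= threshold

# Each pair is (expected answer, agent's answer) for one evaluation task.
results = [("refund approved", "refund approved"),
           ("escalate", "escalate"),
           ("refund approved", "refund denied")]
print("ship" if good_enough(results) else "iterate")
```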

What resonated most with me, though, was his insistence that software is increasingly a vehicle for business model innovation, not just automation. That is exactly the level at which more enterprise conversations are needed. Too many AI discussions are still trapped in productivity rhetoric. Productivity matters, but it is not enough. The more strategic question is what new capabilities, new operating models, and new customer value propositions become possible when software, APIs, and AI agents are designed together.

4) Ram Tallavajhala, agentic success will be decided by governance, not enthusiasm

Ram Tallavajhala’s session brought the hard truth that every organisation currently rushing into agentic AI needs to hear. The more powerful and autonomous these systems become, the more dangerous it is to build them on weak API foundations. He described the shift from deterministic software to probabilistic systems, and that distinction matters enormously. Traditional applications generally follow known paths. Agents do not. They pursue goals, discover routes, interact with multiple systems, and make calls across a more open surface area. That is precisely why weak governance becomes a risk multiplier.

His critique of shortcut culture was especially relevant. Under pressure from boards, markets, and internal leadership, many teams are trying to move faster by skipping documentation, security, metadata discipline, and governance. Ram’s point was simple and brutal: what begins as a shortcut can become a pattern, then a behaviour, then a standard. That is how technical debt compounds. And in an AI-enabled environment, that compounding happens much faster than before. He argued that what once took years or decades to accumulate can now be built in a matter of months.

The most important part of his session was the explanation of why the agentic layer amplifies existing problems. A badly governed API layer is already a liability. But when probabilistic agents are placed on top of that layer, the problem becomes more severe because the agents will search for ways to complete their tasks, choose among available interfaces, and exploit the path of least resistance. Shadow APIs, inconsistent authentication, poor documentation, weak metadata, and incomplete controls all invite failure. In that world, governance is not administrative drag. It is operational protection.

His examples made the point tangible. If two APIs achieve similar outcomes but one is less restricted, an agent may favour that route. If documentation is poor or controls are inconsistent, the agent may still attempt the call. If sensitive systems are exposed through weak interfaces, the consequences extend beyond technical debt to security exposure and business risk. That is exactly why the current enterprise conversation about agents cannot be separated from API maturity.
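A toy sketch makes the remedy visible: put the deny decision in a central gateway rather than relying on the agent's restraint. The endpoint names and policy table here are invented; the pattern is the point:

```python
# Central policy enforcement for agent traffic: only catalogued, governed
# endpoints are reachable, so the "path of least resistance" hits a wall.
POLICY = {
    # Only governed endpoints appear here; anything else is a shadow API.
    "/v2/payments": {"auth_required": True},
}

def gateway_call(endpoint: str, authenticated: bool) -> str:
    rule = POLICY.get(endpoint)
    if rule is None:
        return f"DENY {endpoint}: not in the API catalogue (shadow API)"
    if rule["auth_required"] and not authenticated:
        return f"DENY {endpoint}: authentication required"
    return f"ALLOW {endpoint}"

# An agent probing two routes to the same outcome: the governed endpoint
# works, the easier legacy route is refused at the gateway.
for endpoint, authed in (("/v2/payments", True), ("/legacy/pay", True)):
    print(gateway_call(endpoint, authed))
```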

I also liked his framing of the commercial argument. He urged teams to stop selling “tech debt cleanup” as an isolated hygiene project and instead frame the work as AI readiness. That is smart. Many organisations will not fund cleanup for its own sake, but they will fund foundational work when it is clearly tied to making AI safe, scalable, and production-ready. That is a practical lesson for anyone trying to drive change internally. Governance often fails to get traction when it is presented as a restraint. It lands better when it is presented as the thing that makes sustainable speed possible.

My broader takeaway

What stayed with me after these four sessions was how tightly the themes were connected. The opening remarks framed a world in which APIs, AI, engineering, and governance are converging. Aki showed that agentic AI only becomes meaningful in business when taxonomy, architecture, cost control, and sovereignty are taken seriously. Manjunath argued that software engineering is being redefined around creativity, judgement, and new operating models, not simply code generation. Ram then made the necessary counterpoint: none of it scales if the underlying API and governance foundation is weak.

To me, that was the real value of attending FOST Singapore 2026. It was a reminder that the enterprise AI conversation is finally maturing. The centre of gravity is shifting away from surface-level novelty and toward the harder questions: architecture, workflow design, evaluation, control, observability, security, and trust. That is where the serious work now sits. And frankly, that is where it should sit. Because in the enterprise, AI value is rarely destroyed by lack of imagination alone. It is usually destroyed by weak foundations, poor operating design, and the belief that velocity can substitute for discipline.

For me, this event reinforced what I already strongly believe through the AIdeate Solutions lens. The future will not belong to organisations that adopt more AI. It will belong to those who can connect AI to business reality with the right architecture, governance, human judgement, and a trust layer around it.

Jamshed Wadia

Business and Marketing Advisor @AIdeate | Advisory Board @CMO Council | AI Ethics & Governance @Mavic.AI | Startup Mentor @Eduspaze & @Tasmu | MarTech & AI Practitioner

https://aideatesolutions.com/