Geekflare
Keval Vachharajani

AI That Codes and Works: OpenAI Shows Next Steps for AI Workflows

OpenAI recently hosted a DevDay 2025 AMA (Ask Me Anything) session, where the company shared a glimpse of what’s next for its AI ecosystem. One of the key highlights was making AI agents more discoverable and usable within companies. The Agent Builder platform now allows teams to preview workflows, set up evaluations, and collaborate with stakeholders before exporting projects to the Agents SDK. It works with any model through LiteLLM support, and OpenAI plans to let developers deploy workflows as standalone APIs with webhook triggers. Multi-turn interactions will now retain context within threads, making agents more reliable for complex workflows.
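The webhook-triggered deployment described above could look something like the following sketch. The payload fields (`workflow_id`, `input`) and the `run_workflow` helper are assumptions for illustration; OpenAI has not published a final schema for these endpoints.

```python
import json

def run_workflow(workflow_id: str, user_input: str) -> dict:
    """Hypothetical stand-in for invoking a deployed Agent Builder workflow."""
    return {"workflow_id": workflow_id, "status": "queued", "input": user_input}

def handle_webhook(raw_body: bytes) -> dict:
    """Parse an incoming webhook body and trigger the matching workflow."""
    payload = json.loads(raw_body)
    workflow_id = payload["workflow_id"]   # which exported workflow to run
    user_input = payload.get("input", "")  # initial message for the agent
    return run_workflow(workflow_id, user_input)
```

In practice a handler like this would sit behind an HTTP server and verify a signature header before dispatching, but the parse-and-dispatch shape is the core of the pattern.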

The company also talked about its approach to high concurrency and durable execution. Developers will be able to handle rate limits more gracefully, and workflows executed by OpenAI could soon resume automatically if interrupted. The Agent Builder runtime will support multiple context management strategies, helping agents decide which inputs to prioritise for better performance in extended conversations.
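Handling rate limits gracefully typically means retrying with exponential backoff. A minimal sketch of that pattern is below; the error type, delay values, and jitter are illustrative, not OpenAI's official retry policy.

```python
import time
import random

class RateLimitError(Exception):
    """Illustrative error raised when a request is rate-limited."""

def with_backoff(call, max_retries=5, base_delay=0.5):
    """Retry `call` on RateLimitError, doubling the wait after each attempt."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # budget exhausted; surface the error
            # exponential backoff plus jitter to avoid synchronized retries
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```

Durable execution goes a step further than client-side retries: instead of the caller re-sending work, the platform checkpoints a workflow so it can resume from where it was interrupted.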

For coding-focused developers, the Codex CLI and SDK will remain a central part of OpenAI’s roadmap. The company reassured users that Pro-level limits will stay in place, and the CLI continues to be positioned as a full-time coding assistant. Updates in the pipeline include improved sandboxing, permanent command allowlisting, and better Windows support. OpenAI is exploring autonomous loops where Codex can run API servers, test code, edit, and repeat with minimal human intervention. Multiple customers are already using Codex inside Agents SDK workflows, hinting at a future where coding and general-purpose agents work seamlessly together.
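The autonomous test-edit loop mentioned above can be sketched generically. Here `propose_fix` is a hypothetical placeholder for a Codex call; only the loop structure (run tests, edit on failure, repeat until passing or out of budget) is the point.

```python
import subprocess

def propose_fix(failure_output: str) -> None:
    """Placeholder where an agent like Codex would edit the code."""
    ...

def edit_test_loop(test_cmd, max_iterations=5) -> bool:
    """Repeat: run tests, then request an edit on failure. True on success."""
    for _ in range(max_iterations):
        result = subprocess.run(test_cmd, capture_output=True, text=True)
        if result.returncode == 0:
            return True  # tests pass; the loop ends
        propose_fix(result.stdout + result.stderr)  # feed failures back to the agent
    return False  # iteration budget exhausted without a passing run
```

The bounded iteration count is the human-oversight lever: the agent iterates freely inside the budget, and a failure after `max_iterations` attempts escalates to a person.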

Model access and performance were also discussed. GPT-5 Pro will be available in Codex via ChatGPT accounts, with options for faster execution using medium reasoning or GPT-5-Codex. While the company has considered ways to speed up its models, no concrete plans were announced. Developers will still have access to reliable tools for complex coding tasks while balancing speed against reasoning requirements.

Finally, OpenAI shared updates on the Sora 2 API, which is now available globally wherever the OpenAI API is live. While the Sora app remains limited to the US and Canada, the API allows developers to use image references to guide video generation, though arbitrary input image sizes require pre-processing. On the safety front, the API blocks generation of real public figures and rejects input images containing real human faces, but realistic fictional characters can still be created.
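The pre-processing step for reference images usually amounts to resizing into a supported frame while preserving aspect ratio. A minimal sketch of that calculation follows; the 1280×720 target is an assumed example, not a documented Sora 2 limit.

```python
def fit_within(width: int, height: int, max_w: int = 1280, max_h: int = 720):
    """Scale (width, height) to fit inside max_w x max_h, keeping aspect ratio."""
    scale = min(max_w / width, max_h / height, 1.0)  # 1.0 cap: never upscale
    return (round(width * scale), round(height * scale))
```

For example, a 2560×1440 source maps to 1280×720, while a 640×480 image is left untouched. An actual pipeline would then apply this size with an image library and pad to the exact frame if needed.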

All in all, the AMA session highlighted OpenAI’s focus on building a more connected and developer-ready ecosystem. From persistent, multi-turn workflows to global access for creative tools like Sora 2, the updates show how the company is moving toward a future where AI agents, coding assistants, and content-generation models work together in a more integrated and practical way.
