I love the ideas here.
I do have an issue with XML.
The AI applications I'm currently building are data-heavy and therefore token-dense. Using XML in the prompts instead of Markdown means more token consumption, which can be a real issue for my apps.
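To put a rough number on that overhead, here is a minimal sketch comparing the same prompt section wrapped in an XML tag pair versus a Markdown heading. It uses the common ~4 characters/token heuristic as an assumption; the prompt text and the `approx_tokens` helper are illustrative, and real counts depend on the model's tokenizer.

```python
# Rough comparison of prompt markup overhead: XML tags vs. a Markdown heading.
# Assumption: ~4 characters per token, a crude English-text heuristic, not a
# measurement from any specific tokenizer.

def approx_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token."""
    return max(1, len(text) // 4)

body = "Summarize the attached report in three bullet points."

markdown_prompt = f"## Task\n{body}\n"          # heading-style delimiter
xml_prompt = f"<task>\n{body}\n</task>\n"       # paired open/close tags

md = approx_tokens(markdown_prompt)
xml = approx_tokens(xml_prompt)
print(f"Markdown: ~{md} tokens, XML: ~{xml} tokens, overhead: ~{xml - md}")
```

For a single short section the difference is a couple of tokens, but the XML overhead scales with the number of tagged sections, which is where token-dense, data-heavy prompts feel it.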
I understand your point about XML and the need for token efficiency, but here lies a key architectural tradeoff...
Deterministic parsing and platform independence for your prompts, or speed and cost? Horses for courses, as they say...
Treating prompts as modular pipelines with XML enforcement is a smart approach! Turning brittle instructions into platform-neutral architectures could redefine cross-model LLM reliability.