I find it grating that tools like this say "LLMs" when in fact they only work with OpenAI. There are hundreds of LLM variants; when something works only with gpt-4-turbo or gpt-3.5-turbo, it's inaccurate to call it a tool for LLMs in general.
True, grammar-constrained output has been in llama.cpp since the early days, IIRC. Microsoft had Guidance as well, and now TGI supports it too. It's the endgame for LangChain/LlamaIndex if guaranteed structure adherence was the only reason to use them and beg the LLM for usable output.
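The core trick behind all of these (llama.cpp grammars, Guidance, TGI's support) is the same: at each decoding step, mask out the logits of tokens the grammar can't accept, so the model physically cannot emit invalid output. A minimal, model-free sketch of that idea — the toy single-character vocabulary, the hand-written "grammar" (a fixed target string rather than a real parser state machine), and the junk stand-in model are all assumptions for illustration:

```python
import math

# Toy vocabulary: each "token" is a single character.
VOCAB = ['{', '}', '"', 'a', ':', '1', ',']

def allowed_tokens(prefix: str) -> set:
    """Tiny stand-in for a grammar: only accept continuations that keep
    the output a prefix of the target shape '{"a":1}'. A real engine
    (e.g. llama.cpp's GBNF) tracks a parser state instead."""
    target = '{"a":1}'
    if len(prefix) >= len(target):
        return set()
    return {target[len(prefix)]}

def constrained_greedy(logits_fn, max_steps=10) -> str:
    out = ''
    for _ in range(max_steps):
        allowed = allowed_tokens(out)
        if not allowed:
            break  # grammar says we're done
        logits = logits_fn(out)
        # Mask every token the grammar rejects before taking the argmax.
        masked = [l if t in allowed else -math.inf
                  for t, l in zip(VOCAB, logits)]
        out += VOCAB[masked.index(max(masked))]
    return out

# An "LLM" that always prefers ',' -- unconstrained it would never emit
# valid output; with the mask, decoding is forced onto the grammar.
junk_model = lambda prefix: [0.0] * (len(VOCAB) - 1) + [5.0]
print(constrained_greedy(junk_model))  # → {"a":1}
```

This is also why constrained decoding is a hard guarantee rather than a plea: the invalid tokens never had a chance to be sampled, no matter how badly the model wanted them.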