I found it ironic, watching the live demo on stage, that the examples produced didn't use FQCNs (fully qualified collection names for modules, like ansible.builtin.copy), but rather shorthand names (like copy), even though Ansible's lint tool will complain if you use the latter.
That's likely because the training data comes from Ansible Galaxy, and the vast majority of existing code doesn't use FQCNs. I still prefer not to use them, since they add visual clutter to the playbook (not to mention a tiny bit of extra typing, even with autocomplete), and playbooks work exactly the same 99.999% of the time.
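For anyone unfamiliar with the distinction, here's the same (hypothetical) task written both ways:

```yaml
# Shorthand module name — works fine, but ansible-lint flags it:
- name: Copy the app config file
  copy:
    src: app.conf
    dest: /etc/app/app.conf

# FQCN — the form ansible-lint wants to see:
- name: Copy the app config file
  ansible.builtin.copy:
    src: app.conf
    dest: /etc/app/app.conf
```

Both tasks do exactly the same thing; the FQCN just makes the module's source collection explicit.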
Someone mentioned they were considering adding more lint-friendly inputs into the model, but as with all things AI, the way it's done could badly affect the output, too.
I still don't feel like there's a ton of value in any of the AI-driven code completion tools... writing out code (especially the initial bit) is often only the first 1% of the work when venturing into new programming or automation territory.