Your original metaphor is good and basically correct. Still, when you go to your favourite stationery shop to buy a greeting card, the shop most likely bought it from a supplier who licensed work from real artists to design their cards. Augmented code is the artist using AI to do part of the work. Of course, mass-produced cards are not as personal as writing your own, nor as flexible, but they are still finished products created by artists who cared about their craft behind the scenes.
That is different from asking a model to write the card for you, and different from using some vibecoded app to generate cards for you. Even if the cards are mass-produced, there were humans in the loop who knew what they were doing.
Now imagine a three-year-old selecting a greeting card without really knowing how to read, parroting things that make him sound smart and capable, not fully understanding the occasion or context of the card, or even what is appropriate in general. Beyond whatever personal information is needed to select the card and write a message on your behalf, you also have to trust him with your credit card details, your name and address, and the data of the person you plan to send the card to. You would be telling the three-year-old to drive to the shop, buy a card, customise it, and ship it for you.

This is slop, and slop is dangerous. It may sound exaggerated, but in software-engineering terms, a lot of people vibecoding are the equivalent of three-year-olds who cannot read. And the users of the Vibecoded Slop Greeting Card Generator GPT are trusting the tools and processes these folks are putting together. The problem is not the end result, which the user might be able to verify, but everything happening behind the scenes: the trust required to grant the privileges and data the slop-generating machine needs to perform all the intermediate steps, many of which you cannot check. All this happens in a world full of smart people ready to exploit a naive, unsupervised three-year-old while he does all the heavy lifting for you.

At the end of the day, the question is: do you really want to trust the three-year-old with enough information to do this task for you? If you ask me, no, I really don't. And even if they make the three-year-old smarter, I still would not use him like this. I will buy the card myself, help the shop owner make a living, keep the artist employed, and keep my credit card in my wallet and the recipient's information to myself. I will also keep my brain well trained enough to write a customised message of my own. I might, perhaps, use the three-year-old to proofread my message.
Augmented code is like the artist, who knows what he is doing and can fully verify the results, telling the three-year-old to paint something red. If the three-year-old gets it wrong, the artist will take the time to either correct it or have the child redo it. The task is smaller in scope and directed by someone who can verify and fix mistakes. And most of the people who should be using these tools are not so much impressed by them as worried about how other people are using them and the consequences of that. Hopefully my position makes sense.