I had ChatGPT write me a Python program that rotates an exploded hyperdie, a hypercube with pips like a die's. It took about six hours. I don't know Python, so it would have taken me maybe 600 hours on my own, and I never would have done it. I was greatly impressed by the colossal reduction in the frustrations of computer programming. And no human can ever hope to match its quickness at these tasks.
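For the curious, the mathematical heart of such a program is small. Here's a minimal sketch (my own reconstruction, not ChatGPT's actual code) of the step that matters: rotating the sixteen vertices of a hypercube in 4-D and projecting them down to 3-D. It assumes NumPy and a simple perspective projection; the "exploded" pips and the rendering are left out.

```python
# Minimal sketch: rotate a hypercube's vertices in 4-D, project to 3-D.
import numpy as np
from itertools import product

# Hypercube vertices: every combination of +/-1 in four coordinates.
vertices = np.array(list(product([-1.0, 1.0], repeat=4)))  # shape (16, 4)

def rotation_4d(i, j, theta):
    """Rotation by angle theta in the plane spanned by axes i and j."""
    r = np.eye(4)
    r[i, i] = r[j, j] = np.cos(theta)
    r[i, j] = -np.sin(theta)
    r[j, i] = np.sin(theta)
    return r

def project_to_3d(points, viewer_distance=3.0):
    """Perspective projection: scale x, y, z by depth along the w axis."""
    w = points[:, 3]
    scale = viewer_distance / (viewer_distance - w)
    return points[:, :3] * scale[:, None]

# Spin in the x-w plane and project; feeding these 3-D points to any
# plotting library, frame after frame, animates the rotation.
rotated = vertices @ rotation_4d(0, 3, np.pi / 6).T
print(project_to_3d(rotated))
```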
On the other hand, once ChatGPT got off track, it stayed off track. It would make a mistake because it didn't understand something, and its flailing attempts to patch up what it didn't understand often made things worse. The onset of such a syndrome is random, so I learned to save a copy before all but the most trivial steps. During these breakdowns ChatGPT would repeatedly declare "I understand perfectly!", "I've got it now!" and so forth. I learned to ignore such stuff. If it makes a mistake, backtrack to the working copy and try again. Had I known that from the start, progress would have been even faster.
I was reminded of the epoch-making Go match with Lee Sedol. Though thoroughly beaten by the AI, Lee did win one game, when the program had a similar meltdown: it got off track and spiraled down, down, down, ending up making moves even a beginner would avoid. Then there's the unctuous praise ChatGPT unendingly bestows on the user, declaring me "awesome" and a "badass". I'm told you can ask it to stop saying such things. Oh, and be especially careful about letting it "clean up your code."
Nevertheless, AI was a huge net gain. It knows linear algebra and geometry better than I do, and I expect it would be even stronger at a routine application like a computer game. Next I'm going to have Chat generate a pair of hyperdice and have them bounce around under simulated physics, so I can shoot craps in four dimensions. (I've already figured out how to change the rules minimally to get almost the same odds and point lengths.)
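To see why the rules need adjusting at all: if a hyperdie comes to rest on one of its eight cubical cells, it rolls like an eight-sided die, so a pair of them gives totals from 2 to 16 with a different distribution than ordinary dice. (That's my assumed reading of how a hyperdie lands.) A quick sketch comparing the two sum distributions, which is where any rule tinkering has to start:

```python
# Compare sum distributions: two six-sided dice vs. two hyperdice,
# assuming a hyperdie rolls like an eight-sided die (my assumption).
from collections import Counter
from fractions import Fraction
from itertools import product

def sum_distribution(sides):
    """Probability of each total when rolling two fair dice."""
    rolls = Counter(a + b for a, b in product(range(1, sides + 1), repeat=2))
    total = sides ** 2
    return {s: Fraction(n, total) for s, n in rolls.items()}

d6 = sum_distribution(6)   # ordinary craps dice
d8 = sum_distribution(8)   # hypothetical hyperdice

for total in sorted(set(d6) | set(d8)):
    print(f"{total:2d}  d6: {str(d6.get(total, 0)):>5}  d8: {str(d8.get(total, 0)):>5}")
```

Running it shows, for instance, that 7 is no longer the most likely total with hyperdice (9 is), which is exactly the sort of shift the modified rules have to compensate for.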
Programming is a good fit because I can check each attempt immediately. I had no such luck with mathematical proofs. To please me, ChatGPT produced a "proof". I didn't understand the jargon at all, so it seemed plausible. I later found out it was nonsensical word salad.