A few months ago, I wrote about some experiments with prime numbers. I generated a 16-digit non-prime number by multiplying two 8-digit prime numbers, and asked ChatGPT (using GPT-3.5) whether the larger number was prime. It answered correctly that the number was non-prime, but when it told me the number's prime factors, it was clearly wrong. It also generated a short program that implemented the widely used Miller-Rabin primality test. After fixing some obvious errors, I ran the program, and while it told me (correctly) that my number was non-prime, when compared to a known good implementation of Miller-Rabin, ChatGPT's code made many mistakes. When it became available, GPT-4 gave me similar results. And the result itself could have been a good guess: there's roughly a 97% chance that a randomly chosen 16-digit number will be non-prime.

OpenAI recently opened their long-awaited Plugins feature to users of ChatGPT Plus (the paid version) using the GPT-4 model. One of the first plugins was from Wolfram, the makers of Mathematica and Wolfram Alpha. I had to try this! Specifically, I was compelled to re-try my prime test.

And everything worked: ChatGPT sent the problem to Wolfram, which determined that the number was not prime, and gave me the correct prime factors. It didn't generate any code, but provided a link to the Wolfram Alpha result page that described how to test for primality. The process of going through ChatGPT to Wolfram and back was painfully slow, much slower than using Wolfram Alpha directly or writing a few lines of Python. But it worked and, for fans of prime numbers, that's a plus.
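For readers who haven't seen it, the Miller-Rabin test mentioned above can be sketched in a few lines of Python. This is my own minimal version of the standard algorithm, not the code ChatGPT produced; the function name and the choice of 40 rounds are mine.

```python
import random

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin probabilistic primality test.

    Returns False if n is definitely composite, True if n is
    prime with overwhelming probability (error < 4**-rounds).
    """
    if n < 2:
        return False
    # Quick trial division by small primes.
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # Write n - 1 as 2**s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True
```

Because the test is randomized, a composite number can in principle slip through, but with 40 rounds the error probability is far below any practical concern; that's why the function says "probable" prime.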
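The 97% figure above follows from the prime number theorem: the density of primes near N is about 1/ln(N). For a 16-digit number, ln(10^16) ≈ 36.8, so only about 2.7% of numbers that size are prime. A two-line check (my own back-of-the-envelope calculation, not from the post's experiments):

```python
import math

# Prime number theorem: density of primes near N is roughly 1/ln(N).
N = 10**16
p_prime = 1 / math.log(N)
print(f"P(prime)     ≈ {p_prime:.3f}")      # ≈ 0.027
print(f"P(non-prime) ≈ {1 - p_prime:.3f}")  # ≈ 0.973
```

So always answering "non-prime" for a random 16-digit number would be right about 97% of the time, which is why the correct verdict alone proves very little.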