Connectionism, an early wave of artificial neural network research, promised to tackle AI’s embarrassing tendency to fail catastrophically due to minor random flaws in the code or input data. Indeed, modern multi-layer neural network designs do exhibit “graceful degradation” in many aspects of their operation. Yet, push them hard enough and they will still fail miserably, just at a higher semantic level.
Here’s an example from a brief interaction I had with ChatGPT. Notice how confident the answer sounds; indeed, there’s depth to it. Unfortunately, the system is oblivious to the fact that it has landed in the wrong domain. It confused one famous name (Nikolai Bernstein) with another (Leonard Bernstein), and it confused statistics with neuroscience and movement science; ultimately it gave a good answer to the wrong question. That is not how a human would typically respond: anyone who knew enough about the topic to attempt an answer would at least get the names and domains right. We can conclude that ChatGPT hosts a rich body of knowledge, but its retrieval of context can fail as catastrophically as we would expect from any of the earlier computer expert systems.
Q: What is the degrees of freedom problem according to Bernstein?
A: The Degrees of Freedom problem, also known as the “Bernstein’s problem,” is a statistical issue that arises when trying to estimate the parameters of a multivariate probability distribution using a limited sample size. The problem is that the number of parameters … first identified by the statistician Leonard Bernstein in the 1930s.