At school, I was good at maths and science. But from an early age, I was fascinated by these things called “computers” that were portrayed on TV and in books (this was the early 1970s in a South-East-Asian country, so they were not exactly household objects). I read up everything I could about them, but all the popular accounts were frustratingly short on detail. I was very impressed when some older boys built a rudimentary one (just a few lights and switches) for a science fair at school.
Then some school friends and I got membership at the British Council library in town. They had more books than I had ever seen before, including a full set of the Encyclopedia Britannica. Alone of all the encyclopedias I had seen up to that point, the Britannica article on “Computers” actually had examples of proper program code! (It was in FORTRAN, but, hey, that was still fantastic to me.) In among all the sample statements, there was this:
N = N + 1
As I said, I was good at maths, and I knew what an equation was: enough to realize that this made no sense as a mathematical equation, since there is no value of N (at least, no finite value) that would satisfy it!
But the key point was that in FORTRAN, the “=” denotes, not equality, but assignment. The statement means, “take the current value stored in the location denoted by N, add 1 to it, and store the result back in the location denoted by N”.
Once I had grasped this concept, I understood a whole lot more about computers than I had before.
Other languages from around the same time, designed by Proper Computer Scientists, used “:=” to denote assignment, leaving “=” to represent something closer to its mathematical meaning of equality. But unfortunately, the later popularity of C, which uses FORTRAN-style “=” for assignment and introduced a separate operator, “==”, for equality comparison, has probably meant that whole new generations of maths-savvy teenagers will have to go through the same confusion I did.
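And the confusion is not merely pedagogical: in C, writing “=” where you meant “==” is a classic bug, because an assignment is itself an expression with a (usually truthy) value. A small sketch of the trap, with a function name of my own invention (modern compilers will warn about this with flags like `-Wall`):

```c
/* Intended to test whether n equals 1, but the '=' assigns
   instead of comparing: n becomes 1, the expression's value
   is 1 (truthy), and the branch is always taken. */
int looks_like_a_test(int n) {
    if (n = 1) {   /* bug: assignment, not comparison */
        return n;  /* always reached; n is always 1 here */
    }
    return -1;     /* never reached */
}
```

A language that had kept “:=” for assignment would make this slip a syntax error rather than a silently wrong program.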