There are a few functional programming languages (like Agda, Coq, and Idris) that take a radical step: their compilers will attempt to prove that a given program eventually terminates, and if they can’t prove this, that’s a compile error.
“But wait!” you say. “What about programs that must run forever?” These languages have an answer to this problem, but it’s a bit beyond the scope of this post. I might write about it in the future; if you’re interested, look up codata.
No, it’s another “problem” with termination checking that I want to talk about. Some of you are doubtless familiar with the famous halting problem — and the fact that it is undecidable. And termination checkers for languages like Agda and Coq don’t try to solve the halting problem — instead, they’re conservative. There are a couple of techniques you can use to try to determine whether a program halts, techniques which will never give you a false positive — never tell you that a program halts when it runs forever. Agda’s termination checker tries a bunch of these techniques, and if none of them works, it rejects the program with a compile error: not because the program necessarily loops forever, but because its termination couldn’t be proved. This situation is often described as “Agda has a very powerful type system, but it’s not Turing complete. It gave up Turing completeness in exchange for being really really sure about correctness.” As if this is a drawback of focusing so hard on correctness.
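To give you the flavor of what “conservative” means here, below is a toy sketch in Python. It is entirely hypothetical and grossly simplified — Agda’s real checker demands structural descent on an argument of an inductive type, and the `provably_terminating` helper, the rule it checks, and both sample functions are my inventions for illustration. The toy rule: accept a recursive function only when every recursive call passes `param - c` for some constant `c > 0`.

```python
# Toy, hypothetical sketch of one conservative criterion: every
# recursive call must pass a syntactically smaller first argument,
# specifically `param - c` with c a positive integer constant.
# (Agda's real checker is far more sophisticated; this only conveys
# the shape of the idea: prove termination or reject.)
import ast

def provably_terminating(src: str) -> bool:
    fndef = ast.parse(src).body[0]      # the function definition
    param = fndef.args.args[0].arg      # name of the first parameter
    for node in ast.walk(fndef):
        is_recursive_call = (isinstance(node, ast.Call)
                             and isinstance(node.func, ast.Name)
                             and node.func.id == fndef.name)
        if not is_recursive_call:
            continue
        arg = node.args[0]
        shrinks = (isinstance(arg, ast.BinOp)
                   and isinstance(arg.op, ast.Sub)
                   and isinstance(arg.left, ast.Name)
                   and arg.left.id == param
                   and isinstance(arg.right, ast.Constant)
                   and isinstance(arg.right.value, int)
                   and arg.right.value > 0)
        if not shrinks:
            return False                # can't prove it, so reject
    return True

countdown = """
def countdown(n):
    return 0 if n <= 0 else countdown(n - 1)
"""

collatz = """
def collatz(n):
    if n <= 1:
        return 0
    return 1 + collatz(n // 2 if n % 2 == 0 else 3 * n + 1)
"""

print(provably_terminating(countdown))  # True: every call shrinks n
print(provably_terminating(collatz))    # False: rejected, even though
                                        # it halts for every input
                                        # anyone has ever tried
```

The `collatz` rejection is the conservatism in action: the function might terminate on every input, but this criterion can’t show it, so the checker says no rather than risk a false positive.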
I used to feel this way too — that Agda might be a cool language, but it wasn’t Turing complete, so it was ultimately useless, a thing that belongs locked away in some ivory tower. But actually, if you think about it, Turing completeness is not all that big a deal.
Yes, you want a programming language to be more powerful than a calculator. In fact, you want it to be very powerful indeed — you want it to be powerful enough to express anything you’d ever want to write.
But now I’m going to make a bold claim. I don’t think you’ve ever wanted to write a program for which you can’t know whether it terminates. I don’t think you ever will want to, or need to, write such a program.
You might want to write programs that loop infinitely. Agda can do that. What it can’t do is express programs that loop infinitely without accomplishing anything. I don’t think that’s very important.
Showing that a language is Turing complete is a good, convenient way of proving without a doubt that the language is powerful enough for our purposes: it can represent anything representable, so if even that isn’t good enough, we can’t do any better. But I’m not convinced that a language must be Turing complete in order to be worthwhile, even as a general-purpose language. “Turing space” is full of incomprehensible madness that is no use to us anyway, so why not “give up” the ability to express those nonsense programs in exchange for a more helpful compiler?