"One of the chief pieces of advice I give to aspiring rationalists is 'Don't try to be clever.' And, 'Listen to those quiet, nagging doubts.' If you don't know, you don't know what you don't know, you don't know how much you don't know, and you don't know how much you needed to know."
Eliezer Yudkowsky is an American AI safety researcher and writer. He co-founded the Machine Intelligence Research Institute (MIRI) and is known for his work on rationality and AI alignment, having published essays and technical writing on long-term AI risk.