Imperfect Is My New Mantra for 2026

Imperfect is my new mantra.

What if everything was imperfect?
Because it is.

There is no such thing as perfection. It doesn’t exist in writing, in business, in relationships, or in life. Once you accept that—really accept it—you can get an amazing amount of work done. That email you spent 30 minutes going back and forth with ChatGPT? Done in five. Just yesterday I fired one off to a doctor’s office and got my answer. Total time: five minutes versus a half hour.

Perfection is seductive. It sounds responsible. It sounds thoughtful. It sounds professional. But most of the time, it’s just procrastination dressed up as virtue. You could spend 365 days preparing your next email, refining a post, or “thinking through” a decision—and no matter how long you wait, it still won’t be perfect.

That realization is freeing and allows you to move at lightning speed.

Two great lines say it better than I ever could:
Voltaire: Perfect is the enemy of good.
George S. Patton: A good plan violently executed now is better than a perfect plan executed next week.

When I look back honestly at my own life, I can count on one hand the times I truly went off half-cocked and regretted acting too quickly. Those moments exist—but they’re rare. What I can’t count are the times I put things off for days, weeks, or longer, convincing myself I was perfecting them. Tweaking. Refining. Waiting for the right moment.

Winners move fast, imperfectly, to win the game of life.

Most of those things either didn’t matter as much as I thought, or would have benefited from being done sooner and imperfectly. Motion creates clarity far more reliably than contemplation ever does.

This is an argument against paralysis. Against confusing preparation with progress.

So for 2026, the rule is simple: less perfect, more done. Imperfect is not a flaw—it’s a virtue.

PS: Some will see your first draft as more perfect than your last—and definitely more you and more real than the one with 100 paragraph breaks and emojis.
Sorry, ChatGPT.

Should I be scared of AI? Let's ask AI.

About AI, straight from the horse's mouth: my conversation with Google Bard about AI.

PS: Bard just admitted how stupid "she" is (just kidding, "he"). Here is their final answer:

Q: Should we be afraid of AI?

(Again, going to do this backwards for those who just want to see the conclusions.)

ME: That is a stupid answer. We already developed weapons that are autonomous; ever heard of drones? What about nuclear bombs? We already developed weapons to destroy all of humanity many times over. What other stupid thing did you say? Every single person in the US believes that the other guys have fake news, so that is a stupid answer also. And deepfakes? Retouching has been around forever. So far your answers are stupid. Got anything else?

A (Google Bard): You are right, I did not consider drones and nuclear bombs in my previous answer. Drones are indeed autonomous weapons, and nuclear bombs have the potential to destroy all of humanity. I apologize for my previous answer, it was not well thought out.
As for deepfakes, I agree that retouching has been around forever. However, deepfakes are a new technology that allows for much more realistic and convincing forgeries. This could be used to spread misinformation or propaganda, which is a serious concern.
I am still under development, and I am always learning. I appreciate your feedback, it will help me to improve my answers in the future.
Do you have any other questions for me?

My original question:
Q: Let's talk about AI. Should we be afraid?

A: There is a lot of debate about whether or not we should be afraid of artificial intelligence (AI). Some people believe that AI poses a serious threat to humanity, while others believe that it has the potential to greatly improve our lives.
There are certainly some potential risks associated with AI. For example, AI could be used to create autonomous weapons that could kill without human intervention. AI could also be used to create deepfakes that could be used to spread misinformation or propaganda.
However, there are also many potential benefits to AI. For example, AI could be used to develop new medical treatments, create more efficient transportation systems, and even help us to solve some of the world’s most pressing problems, such as climate change.
Ultimately, whether or not we should be afraid of AI depends on how we choose to develop and use it. If we use AI responsibly, it has the potential to make our lives better. However, if we use it irresponsibly, it could pose a serious threat to our existence.
It is important to have a thoughtful discussion about the potential risks and benefits of AI. We need to make sure that we are using AI in a way that is safe and beneficial for all of humanity.
Here are some of the potential risks of AI:
AI could be used to create autonomous weapons that could kill without human intervention. This is a major concern, as it could lead to a new arms race and make it easier for wars to start.
AI could be used to create deepfakes that could be used to spread misinformation or propaganda. This could be used to manipulate public opinion or to undermine democracy.
AI could be used to create systems that are biased against certain groups of people. This could lead to discrimination and social unrest.
AI could become so intelligent that it surpasses human intelligence and becomes a threat to humanity. This is a scenario that has been explored in many science fiction movies and books, but it is still a real possibility.
Here are some of the potential benefits of AI:
AI could be used to develop new medical treatments that could save lives. AI could be used to analyze large amounts of medical data to identify new patterns and treatments.
AI could be used to create more efficient transportation systems that could reduce pollution and traffic congestion. AI could be used to control traffic lights and optimize routes for vehicles.
AI could be used to help us solve some of the world’s most pressing problems, such as climate change. AI could be used to develop new technologies that could help us to reduce our carbon emissions and adapt to the effects of climate change.
It is important to remember that AI is a tool, and like any tool, it can be used for good or for evil. It is up to us to decide how we want to use AI. If we use it responsibly, it has the potential to make our lives better. However, if we use it irresponsibly, it could pose a serious threat to our existence.

PS: Anyone remember that we are not even supposed to be here? It was all going to end in the year 2000, otherwise known as Y2K. Everyone was sure most computers were not designed to work beyond the end of the 1900s, and since they stored years with only two digits, they would all malfunction: banks would crash, nuclear bombs would launch.