By: Adam Contos
I stand in the room, shoulder-to-shoulder with the decision-makers of a lucrative deal that is about to go through. Everyone has an agenda, and everyone has a boss to report to when it's over. What I'm describing may sound to you like a boardroom during a high-stakes business negotiation, but I'm standing in a room full of drug dealers. That scenario wasn't from a movie; it was my life as an undercover narcotics agent. Before I ever set foot in a boardroom, or became CEO of a massive company, I served in the military. Later, I became an undercover narcotics agent and then a SWAT Commander, spending each day in situations like the one I just described.
You might wonder how the skills of a SWAT Tactical Commander and military veteran can provide an incredible edge in business. I went from kicking in doors to kicking off board meetings, trading in my tactical gear for a suit and tie. The military taught me discipline, how to operate within a proven system, and how to navigate a clear chain of command to ensure the objective is met.
This all led me to the world of franchising at RE/MAX, where I climbed the corporate ladder and became the CEO of a public company that supports nearly 9,000 franchise units and 140,000 agents in 110+ countries. Most recently, my journey has led me to growing and coaching leaders through Area 15 Ventures, a private equity/family office franchise growth engine. Across all of these drastically different career paths, the mission has remained the same: protect the people, serve the people, grow the people.
Part of protecting, serving, and growing people is staying ahead of the innovation curve as a leader. With 30% of jobs in the U.S. expected to be fully or semi-automated by 2030, it's no longer a question of when artificial intelligence will be completely ingrained in our society, but how. And as popular as A.I. has become, many people still adopt it for the wrong reasons.
Don’t Follow A.I., Lead It.
Artificial intelligence is not a plug-and-play silver bullet. Used for the right reasons, it is a performance multiplier for good processes. The key is that the operating system in question was optimized to begin with; A.I. simply lends a hand to make it better and frees the human behind the screen to focus on bigger-picture, more complex company goals.
On the other side of the coin, A.I. can amplify poor practices just as easily as good ones. If your data is messy and too broad, or your company culture is too vague, A.I. simply consumes, processes, and accelerates that chaos. A.I. isn't meant to make final decisions; that's still up to humans. Use it to support your goals and strengthen your solutions, but follow it blindly and it can make your problems even worse.
Prompt engineering (what you type in or ask the bot to do) and critical thinking go together. Prompts should educate the A.I. and keep the voice of its output consistent with the values of the company. Otherwise, it will give you a statement as boring and simple as the language you used to prompt it. You must coach it and mold it to do what you want rather than letting it take the reins or fill in the gaps with incorrect, regurgitated information.
For example, first use an A.I. tool to help “seed” the situation and develop ideas. Give it context and background: who's asking the question and why, the culture of the organization, the brand's methods of operating, and the relationship standards you hold with customers and the public. That is how you keep the brand's voice and values consistent. A simple sketch of what a context-seeded prompt might look like follows below.
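To make that concrete, here is a minimal sketch in Python of how a context-seeded prompt might be assembled. The field names, sample brand details, and helper function are illustrative assumptions only, not a prescribed template; the point is simply that the same company context travels with every request before the task is stated.

```python
# A minimal, illustrative sketch of a context-seeded prompt. Every name and
# value here (the context fields, the sample brand details, the task) is a
# placeholder assumption, not a prescribed template.

COMPANY_CONTEXT = """\
Who is asking and why: a franchise development leader drafting talking points for prospective owners.
Culture of the organization: people-first, direct, optimistic; no hype or jargon.
Methods of operating: humans make the final decisions; A.I. output is a first draft only.
Relationship standards: plain, respectful language with customers and the public.
"""

def build_prompt(task: str) -> str:
    """Attach the company context to every request so the brand voice stays consistent."""
    return (
        f"{COMPANY_CONTEXT}\n"
        f"Task: {task}\n"
        "Return a rough draft for human review, not a final answer."
    )

if __name__ == "__main__":
    # Pass the result to whatever A.I. tool your team has approved;
    # a person still edits and fact-checks whatever comes back.
    print(build_prompt("Outline three talking points about our new training program."))
```

However you implement it, the design choice is the same: the context lives in one place, it is stated before the task, and the output is framed as a draft for human review rather than a finished product.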
People-First Mindset: Using A.I. to Assist, Not Replace
For an optimized A.I. working experience, technology needs to ease the strain of a heavy workload, not add to it or replace the people carrying it.
I’m a fan of the A.I. First model, where smart technology helps germinate ideas and sharpen insights before they reach the table. But with Gallup reminding us that only 21% of employees globally feel engaged, staying People First is just as vital. A.I. should support humans, not sideline them, by handling the busywork so people can focus on what truly matters.
Responsible Use and Written Communication with A.I.
I don’t know where the misconception first formed, but the output of an A.I. tool is never ready for publication, nor is A.I. the bona fide decision-maker. Treat that initial output as a draft, because that is exactly what it is! I use A.I. tools to help me create the final product, but I am part of it every step of the way, and I have the final say. I do not use these tools to write entire business briefs, contracts, or essays for me, nor would I send anything a chatbot produces to a client without looking it over and fact-checking it first.
With this technology developing as quickly as it is, legislation and regulation can’t keep up. There is little federal or state-level precedent that businesses can refer to when defining the usage and boundaries of A.I.
Thus, responsible use of artificial intelligence must be a standing agenda item for leadership, not an IT update. In fact, I don’t believe the IT department should own the A.I. program. They should be a part of it, but the executive team should take charge of the A.I. strategy collectively and guide it in a way that will best benefit their business and employees.
When it comes to responsible and practical use of A.I., maintaining company culture is key. Culture is just how we behave when no one’s watching.
When leaders embody curiosity, integrity, and a spirit of experimentation, their teams tend to reflect those values in how they apply A.I. But when leaders get distracted by flashy trends or accept sloppy data practices, the organization’s culture can decay as quickly as the algorithms it runs.