Fascinating concept, Tim, especially because you're embedding yourself in the experiment! Alongside regression testing, it might be valuable to stress-test leadership: observing how leaders respond under high workloads or constrained resources. With the increasing complexity of workplaces, edge and corner cases are becoming more relevant, not just in software but in human performance as well.
I’d also bring “unbossing” into the conversation, as more companies flatten hierarchies and phase out middle management. That shift makes it even more important to proactively support emerging leaders and identify those struggling early in the cycle. Unfortunately, skip-level reviews (when they happen at all) too often reflect org politics more than actual performance.
My question for you: Could AI tools help simulate or support your testing approach? I haven't used them myself, but I'm curious about tools like BetterUp, Leena AI, Lattice, and Eightfold AI.
I look forward to reading more about your experiment :-D
Thank you!!
+1 on unbossing! You hit the nail on the head; hopefully this will encourage ICs to step into leadership behaviors proactively.
Good question regarding AI tooling!
- I think LLMs could help brainstorm ideas for artifacts to verify your behaviors against. Ideally, you'd be allowed to use an LLM that you can feed context such as strategy documents, so its suggestions become more targeted.
- On AI judging the pass/fail criteria, I'm not fully certain yet. I've seen certain LLMs push you into confirmation bias, so you have to be careful. I'd argue that having a trusted peer manager or a coach peer-review your criteria might be more effective for now, but I can also see that, if you specify your pass/fail criteria exhaustively, an LLM might catch 80% of the cases well! I'll actually take that with me and give it a few tries; a rough sketch of what I mean is below.
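To make the second point concrete, here's a minimal sketch of what "exhaustively specified pass/fail criteria" could look like when handed to an LLM. Everything in it is a made-up placeholder: the behavior, the criteria, the artifact, and the model name. It happens to use the OpenAI Python client, but any LLM endpoint would work the same way.

```python
# Minimal sketch: ask an LLM to judge a leadership artifact against
# explicit pass/fail criteria. All content below is hypothetical.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical, exhaustively specified criteria for one behavior.
criteria = """
Behavior: "I give actionable feedback in 1:1s."
PASS only if the notes contain (a) at least one specific, recent example
AND (b) a concrete suggested next step with a deadline or owner.
FAIL otherwise. Answer PASS or FAIL, then one sentence of justification.
"""

# Hypothetical artifact: notes from a recent 1:1.
artifact = (
    "Discussed the Q3 launch. Told Alex the rollout plan lacked a rollback "
    "step and asked them to add one by Friday."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any capable model would do
    messages=[
        # A deliberately strict system prompt, as one small guard against
        # the confirmation-bias tendency mentioned above.
        {"role": "system", "content": "You are a strict reviewer. Apply the "
         "criteria literally and do not give the benefit of the doubt."},
        {"role": "user", "content": f"{criteria}\n\nArtifact:\n{artifact}"},
    ],
)
print(response.choices[0].message.content)  # e.g. "PASS - ..."
```

The strict system prompt is only a partial mitigation for the confirmation-bias problem; having a trusted peer review the criteria themselves still feels like the stronger check for now.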