Recently, I have been thinking a lot about the principal–agent problem. I honestly wish it were called the “stewardship problem”, because I feel that name better conveys the ethical dimension of the issue.
Among other things, I have been asking myself: what timeline should a steward assume? I think that, in the absence of other information, a steward must assume an infinite timeline of care for, and exploitation of, the systems/resources/whatever. That means the steward must act to create the conditions of sustainability, even though the steward does not personally have an infinite timeline (unfortunately), and even though doing so may conflict with his or her economic self-interest.
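One way to make this concrete (a toy sketch of my own, using a textbook exponential-discounting model; the yield y, the windfall W, and the discount factor γ are all assumptions I am introducing here, not anything established above): compare a sustainable policy that produces a yield y every period forever against a depleting policy that pays a one-time windfall W and nothing afterwards.

```latex
% Discounted value of each policy, with discount factor \gamma \in [0, 1):
V_{\text{sustain}} = \sum_{t=0}^{\infty} \gamma^{t} y = \frac{y}{1 - \gamma},
\qquad
V_{\text{deplete}} = W .
% As \gamma \to 1 (the infinite-timeline assumption), V_{\text{sustain}}
% diverges, so it eventually exceeds any finite windfall W.
```

In this toy model, assuming an infinite timeline amounts to refusing to discount the future away: as γ approaches 1, the sustainable policy dominates any finite one-time payoff, however large.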
Another question is: who is responsible for designing the incentives? I think it is obvious that the principals are responsible for designing correct, beneficial incentives. But designing perfect incentives is impossible, and whatever rules we put in place, sooner or later people (in general) will try to pervert them for their own benefit. I see no way out of this but to rely on ethics and self-respect as a greater good than economic self-interest.
Expanding on sustainability, in software development that may mean:
- Engineering teams are kept healthy and happy, with enough people to build, fix, and operate the software.
- Knowledge is captured in multiple places and on a variety of media, and disseminated often, through both training and development challenges.
- Software development lifecycles are deliberately designed for maintainability, flexibility and velocity in order to support both exploration (innovation) and optimal exploitation over time.
- Return on investment in the software, its future prospects, and the conditions for its continued offering or eventual deprecation and shutdown are assessed continuously, honestly, and transparently.
Other thoughts:
- This problem arises at every delegation juncture, not just at the level of owners/boards/management and the like.
- The AI alignment problem looks like the same core problem (ensuring that interests are aligned), including the fact that agents can do dumb, self-destructive things that are not in their own best interest either. With this claim I also dispense with the fashionable obligation du jour of mentioning AI.
- And finally, representative democracy has the same problem: the agents (the representatives) are quite often not aligned with what the principals (the people) want. That one really exposes how important it is to always (always) work toward the sustainability of the overall system rather than one’s own self-interest.
Last modified on 2023-05-14