Getting the best out of an adversarial project

This post comes from watching two very different vendors, on two very different projects, handle a familiar situation. A project hits issues, the mood turns, and suddenly the vendor is no longer the most popular party in the room. What stood out was the contrast in how each vendor responded, and more importantly, how one of them managed to turn things around.

There are many times in delivery work where you can find yourself on the wrong side of client sentiment, and quite often it has very little to do with the quality of your work. Budgets change, and what was previously seen as good value suddenly looks expensive. Internal restructures happen, and contracts that once made sense are now resented. People move on, political dynamics shift, and you may have been brought in by someone who is no longer in favour.

Sometimes the issue is entirely external to your delivery. A change elsewhere in the organisation can make a project seem less relevant or less innovative. It is not your fault, but it is still your delivery, and that means you will feel the impact.

In these situations, the reason matters far less than the response. As a vendor or consultant, you are being paid to navigate this, whether it is fair or not. The question is how you handle it.

One option is to double down on delivery. That might mean absorbing additional cost, adding resources, or simply pushing harder to ensure the outcome lands well. In fixed price environments, some of this should already be accounted for, but there are times when you have to take a hit. Larger organisations often reject this kind of response purely on short-term financial grounds, in other words, this year’s bonus. That is understandable, but it can be short-sighted. Reputation has real value, and people have long memories. It is worth weighing the reputational cost before reacting defensively. In many cases, it is better to grit your teeth and deliver.

Secondly, do not get drawn into internal politics. As a vendor, your role is not to take sides. Stay focused on both the letter and the spirit of your delivery. Turning overly rigid or contractual in tone can be just as damaging as open frustration. Clients are looking for a quality service, not just a strict interpretation of a contract.

There will always be occasional clients who push too far, but they are the exception rather than the rule. In most cases, if you provide something they can confidently take up the chain and demonstrate as progress, you will remain in a good position.

Thirdly, consider resourcing. Some organisations respond to struggling projects by quietly rotating out strong people and replacing them with those who will simply maintain the status quo. This rarely helps. If anything, it reinforces decline. If a project is under pressure, it is worth putting strong people on it, or at least ensuring visible senior engagement. Even if the issue is largely political, visible commitment matters. It shows that you are taking the situation seriously.

Fourthly, look to the future. Even when a project is difficult, it helps to position it as a temporary setback rather than a defining failure. Talk about what can be improved next time and how you can work better together. That sense of continuity and investment can shift the tone of the relationship. If you demonstrate that you are thinking beyond the immediate problem, clients often respond in kind.

Finally, be careful in how you offer solutions or retrospective insight. When relationships become strained, it is very easy to slip into criticism. Phrases that imply fault or hindsight superiority will only escalate tension. It is far more effective to frame things collaboratively, for example: “If we had known this earlier, we would have approached it differently, and we can take that forward to the next project.” The aim is to reinforce that you are working together, not against each other.

In practice, much of this comes down to restraint. You will sometimes be blamed for things that are outside your control. That is part of the role. The vendors who handle it well are the ones who avoid becoming adversarial themselves, focus on delivery, and keep an eye on the longer term relationship.

Once the immediate tensions pass, and they usually do, those behaviours are what determine how well you work together going forward.

“That’s Not My Responsibility” – The Battle Cry of the Silo

Although the title sounds cynical, this is not intended to be a cynical post. In my role across integration, security, and general problem solving in architectural and IT spaces, “that’s not my responsibility” or “that’s not our department” is one of the most common responses I hear. It is also, in many cases, a perfectly reasonable one.

So it is worth taking a moment to understand why this response exists, and then look at how to work through it in a practical and human way.

Why does this happen?

In my experience, this response is very rarely driven by malice or laziness. It is usually a rational reaction to long term consequences.

Taking responsibility for something in a corporate environment often means owning it indefinitely. Processes and systems can live for decades. What looks like a small favour today can turn into an ongoing obligation that requires budget, support, and accountability years down the line.

There are also risks. If something goes wrong, particularly where there are compliance or legal implications, the person or team that accepted responsibility may be held accountable. That is a serious consideration, especially if they were not involved in shaping the original solution.

The more complex or unclear an issue is, the more likely it is that ownership is undefined. At that point, it effectively becomes a hot potato. People are not avoiding work, they are avoiding inheriting long term risk without the ability to properly manage it.

So how do you get things done?

When you need something fixed or owned, and everyone is stepping back, there are a few practical approaches that tend to work. None are perfect, but each has its place.

1. Escalation

This is the blunt instrument. Sometimes you need to escalate to someone who sits across both areas and can make a decision.

It works, but it should be used sparingly. Overuse damages trust and can give you a reputation for frankly being a bit of a d*** or only caring about your side of any problem.

2. Providing budget or resource

This is often the most effective and least confrontational approach.

If a team is constrained by time or funding, removing that constraint changes the conversation entirely. Offering budget, a cost centre, or even additional resource can turn a “no” into “when do you want it”.

Clarity helps. “I can fund this, when can it be delivered?” is a very different discussion to “can you do this for me?”

3. Removing competing pressure

Sometimes the issue is not the work itself, but competing demands.

If you can help reduce noise around a team by handling or deflecting other requests, you create space for your work to be considered properly. It also demonstrates that you are contributing, not just demanding.

4. Taking responsibility yourself

This works more often than it should.

By formally taking ownership, even temporarily, you remove the long term risk for others. A simple, clear statement of responsibility can be enough to unblock progress, especially if it protects the other team from future audit or support obligations.

It is about giving people confidence that they are not inheriting unknown liabilities.

5. Formalising ownership

This is the most sustainable option, but also the slowest.

If responsibility is unclear, define it properly. Get agreement, document it, and ensure it is recognised at an organisational level. This allows teams to plan, budget, and defend their position in future.

It also creates a repeatable process. Once something has been formalised, it becomes easier to apply the same approach again.

The most important part

Whichever route you take, there is one thing that matters more than anything else.

Clean up after yourself.

Do not leave behind unclear ownership, unmanaged risk, or hidden technical debt. Do not put someone in a position where they will be questioned months or years later without context or support.

Reputations in this space last a long time. People remember who made their lives easier and who made them harder. The industry is smaller than it appears, and those impressions carry forward.

Solve the problem, but leave things in a better state than you found them.

Never underestimate the per-user cost spike.

The full title of this post should really have been “Do not underestimate the cost per user on a small, well-planned project versus both a proof of concept and an ‘at scale’ system”, but that would have just looked silly.

One thing that is overlooked time and time again, and eventually catches up with every project, is the jump in cost per active user when you move from a proof of concept to something that can work at scale, before the advantages of a large-scale project start bringing the costs back down.

Let me explain.

A proof of concept exists to answer a simple question. Will this work? Whether it is an application, a migration, or any other kind of system, particularly in a corporate environment, the aim is validation rather than longevity. The challenge comes when moving from that early validation to a solution that is robust, commercially sensible, and capable of operating at scale with a reasonable cost per user.

Between those two points sits a significant and often underestimated investment. It is the cost of building proper foundations.

This tends to appear most clearly in two common scenarios.

The first is the seemingly small project. For example, moving a limited number of users from one system to another, or implementing a narrowly scoped solution. The proof of concept is completed, the demonstration goes well, and confidence is high. Then the implementation costs are presented, and everyone has a heart attack. It only has ten users, so why does it cost so much?

The answer is that the cost is not driven by the number of users. It is driven by the need to build something correctly. Even for a small user base, the underlying architecture, security, supportability, and operational processes still need to be in place if the solution is to be reliable.
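To make that concrete, here is a minimal sketch of the cost curve. Every figure in it is an illustrative assumption, not a benchmark: a small fixed cost for throwaway foundations, a much larger fixed cost for proper ones, and a flat variable cost per user.

```python
# Toy model of cost per active user across delivery phases.
# All figures below are illustrative assumptions, not benchmarks.
phases = {
    # phase name: (fixed foundation cost, variable cost per user, users)
    "Proof of concept":    (5_000,   50,     10),  # minimal foundations
    "Small production":    (150_000, 50,     10),  # full foundations, few users
    "At-scale production": (150_000, 50, 10_000),  # same foundations, amortised
}

for phase, (fixed, per_user, users) in phases.items():
    cost_per_user = fixed / users + per_user
    print(f"{phase:>20}: {users:>6} users -> roughly {cost_per_user:,.0f} per user")
```

With these made-up numbers, the proof of concept works out at roughly 550 per user, the small production system at roughly 15,050, and the at-scale system at roughly 65. The spike in the middle is simply the cost of proper foundations divided across very few users.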

The second scenario is more subtle but ultimately more disruptive. A proof of concept is seen to work well, and the decision is made to move it directly into production. Initially, this can appear successful. It may run for months, sometimes even years, without major issues. However, without those foundational elements being in place, sooner or later things are going to go bang, and always at the least convenient moment.

When they do, they are rarely simple to fix. It is not a case of adding a little more budget or introducing a small process improvement. Instead, it often requires going back and rebuilding the core of the solution properly. By that point, the cost is usually higher than if it had been done correctly from the outset, as there is the added complexity of undoing or reworking what is already in place.

In effect, the cost has not been avoided, only delayed, and it now comes with interest.

When moving from a proof of concept to a production system, it is important to anticipate this increase in cost and to communicate it clearly to stakeholders. Likewise, when delivering something that appears small today but has the potential to scale in the future, the investment in solid foundations can seem disproportionate.

It is natural for that to prompt questions and resistance. However, it is worth being clear that reducing that investment rarely removes the cost. More often, it shifts it into the future, where it becomes more expensive and more disruptive to address.

The danger of corporate communications during the golden hour.

I have mentioned it before, but there is something that I like to call the golden hour in any multinational organisation. This is a particular time during the day when you can get the three major corporate time zones, India, the US and Europe, all in a meeting at the same time.

It is normally somewhere between 1:30 and 3:30 UK time. You get an awful lot of large-scale broadcast meetings during this window because it is the only period where everybody is technically available. What this really means, however, is that a lot of important things that people should be paying attention to are all happening at once.
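As a rough illustration of why the window is so narrow, here is a small Python sketch that computes the shared slot. The working hours are assumptions made for the sake of the example (India flexing into the evening, the US starting early); change them and the window moves or disappears entirely.

```python
from datetime import date, datetime, time
from zoneinfo import ZoneInfo

# Assumed working hours per region, purely illustrative.
# India flexes late and the US starts early; with a strict nine-to-five
# in all three regions there is no shared window at all.
REGIONS = {
    "Asia/Kolkata":     (time(9, 0),  time(19, 30)),
    "Europe/London":    (time(9, 0),  time(17, 30)),
    "America/New_York": (time(8, 30), time(17, 0)),
}

def shared_window(day):
    """Return the (start, end) of the all-regions window in UK time, or None."""
    uk = ZoneInfo("Europe/London")
    starts, ends = [], []
    for tz_name, (start, end) in REGIONS.items():
        tz = ZoneInfo(tz_name)
        starts.append(datetime.combine(day, start, tzinfo=tz).astimezone(uk))
        ends.append(datetime.combine(day, end, tzinfo=tz).astimezone(uk))
    begin, finish = max(starts), min(ends)
    return (begin, finish) if begin < finish else None

window = shared_window(date(2025, 1, 15))
if window:
    print(f"Shared window (UK time): {window[0]:%H:%M} to {window[1]:%H:%M}")
else:
    print("No shared window with these hours.")
```

On a winter date this yields a window of roughly 13:30 to 14:00 UK time, and daylight-saving shifts nudge it around over the year, which is exactly why everything piles into the early UK afternoon.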

At most clients over the last decade I have seen the same pattern. You will have multiple recurring meetings that you really do need to attend in conflict with plenty of others of equal importance, plus individual team standups for projects that are cross-region. You would think people would handle this sensibly and avoid booking meetings when people are not available. But a lot of the time the big meetings are aggregate meetings. For example, you might have a weekly meeting where all projects review cloud costs, or where all projects are invited to work through delivery timelines, outages, or similar topics. They pick the time where everybody could theoretically attend because there are fifty, sixty, or more people on the call. In reality, there is never a chance they could all attend, so the meeting is booked in the slot where, in theory, they could.

The inevitable result is that people miss things. Yes, there will be follow-up communications, but the crucial detail you actually need may be buried in a single PowerPoint slide. In a PowerPoint deck of fifty, sixty or seventy slides, attached to an already crowded inbox, you are going to miss things. When enough major communications happen in this way, multiple people miss the same information.

Unfortunately, this is how you end up with something going wrong and the inevitable question is asked, ‘Why was I not informed?’ The answer is usually that it was in an email somewhere.

This is simply a warning note. You either need to protect this time window so that only one or two major calls are scheduled into it, or you need to challenge the approach and ask for communications to be split into regional time zones. As a PM or manager, this is your danger zone. You have to watch it constantly and, stressful as it feels, try to track all the individual updates that might affect your delivery when they arrive as general blasts during this period.

Wild Adoption Vs Crippling Bureaucracy

This is a lesson from earlier in my career that feels painfully relevant to our current cloud environments.

With all of the cloud services now available, one of the biggest changes we have seen is just how many features are suddenly at everyone’s fingertips. In particular, when it comes to on-demand infrastructure, you can now build things in seconds that would once have taken months. That speed has produced a slightly nervous response from a lot of infrastructure, compliance and finance teams, and for good reason. You can get something into production incredibly fast, unlock a lot of demand, and just as quickly build up a very expensive bill without really meaning to.

Remember, one of Amazon’s core principles is to make it easy for people to give them money, and AWS, along with the other cloud providers, has brought that mindset to cloud provisioning with real enthusiasm.

The problem is that there never seems to be a sensible middle ground.

In many large organisations, infrastructure services now make it cripplingly difficult to get anything done, often far harder than it ever was with on-prem services or specialist hosting. It feels like the only two options on offer are total freedom or total lockdown.

We have been here before.

The first example that always comes to mind is Microsoft Access. Back in the day, when people wanted space on SQL Servers and were denied, they used Microsoft Access and Excel instead. When they wanted development capability and were denied, they built it themselves. It became a running joke to judge how frustrated the business was by checking the file systems to see how many new Access databases had appeared and how large they had grown.

Lotus Notes followed a similar pattern. In the early days, users were given templates and just enough rights to create their own databases. Huge numbers of them appeared very quickly. Some became production systems, then the servers filled up and chaos followed. The response was to clamp down harder and harder on new databases and features, until eventually it became so difficult to do anything at all that the core reason for having Lotus Notes disappeared. At that point, you might as well have just used a decent email client. In the end, that behaviour helped cripple the platform.

SharePoint inherited many of the same issues, just with different tooling.

Businesses will always route around blockages. You cannot stop that. What I am seeing again now is the same failure pattern. Crippling bureaucracy is being applied to infrastructure. A new easy-to-use tool appears, and instead of guiding its use sensibly, it gets locked down. What happens next is entirely predictable. Make it hard to get a Salesforce site or proper support, and the business will simply go and buy a new tenant. The same is true of Azure and AWS.

You have to find the middle ground.

If you let people have whatever they want, it spreads like wildfire. You do not get good value for money, and people start building their own little empires rather than delivering value to the business. But if you go the other way and make it cripplingly difficult for the organisation to grow, expand, or even function, then that demand will leak out sideways into shadow IT and unofficial platforms. At that point, you have a much bigger problem on your hands.

So when you are planning your services, and planning how users can request and consume them, remember this. If you make it too hard, you are actively stifling the business. It may comply for a while, but it will eventually find another way. History has shown this again and again.