The danger of corporate communications during the golden hour

I have mentioned it before, but there is something that I like to call the golden hour in any multinational organisation. This is the particular time during the day when you can get the three major corporate time zones 1 all on a meeting at the same time.

It is normally somewhere between 1:30 and 3:30 UK time. You get an awful lot of large-scale broadcast meetings during this window because it is the only period where everybody is technically available. What this really means, however, is that a lot of important things that people should be paying attention to are all happening at once.
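
To make the window concrete, here is a minimal sketch of the arithmetic, assuming Europe/London as the anchor and America/New_York, Europe/Berlin and Asia/Kolkata as representative offices; swap in whichever zones your organisation actually spans.

```python
# A rough sketch of why the window lands where it does: convert a
# 13:30-15:30 UK slot into representative zones for each region.
# The chosen zones are illustrative assumptions, not a fixed list.
from datetime import datetime
from zoneinfo import ZoneInfo

uk = ZoneInfo("Europe/London")
zones = {
    "US East": ZoneInfo("America/New_York"),
    "Europe":  ZoneInfo("Europe/Berlin"),
    "India":   ZoneInfo("Asia/Kolkata"),
}

for hour, minute in [(13, 30), (15, 30)]:
    slot = datetime(2025, 12, 4, hour, minute, tzinfo=uk)
    local = ", ".join(
        f"{name} {slot.astimezone(tz):%H:%M}" for name, tz in zones.items()
    )
    print(f"UK {slot:%H:%M} -> {local}")

# UK 13:30 -> US East 08:30, Europe 14:30, India 19:00
# UK 15:30 -> US East 10:30, Europe 16:30, India 21:00
```

As the output shows, the slot is mid-morning on the US East Coast, mid-afternoon in Europe and well into the evening in India, which is exactly why it is the only period where everyone is even theoretically on the clock.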

At most clients over the last decade I have seen the same pattern. You will have multiple recurring meetings that you really do need to attend in conflict with lots of others of equal importance, plus individual team stand-ups for every cross-region project. You would think people would handle this sensibly and not book a meeting when people are unavailable. But a lot of the time the big meetings are aggregate meetings. For example, you might have a weekly meeting where all projects review cloud costs, or where all projects are invited to work through delivery timelines, outages, or similar topics. With fifty, sixty, or more people on the call, the organisers pick the time when everybody could theoretically attend. In reality, there is never a chance they could all attend, so the meeting is booked in the slot where, in theory, they could.

The inevitable result is that people miss things. Yes, there will be follow-up communications, but the crucial detail you actually need may be buried in a single PowerPoint slide. In a PowerPoint deck of fifty, sixty or seventy slides, attached to an already crowded inbox, you are going to miss things. When enough major communications happen in this way, multiple people miss the same information.

Unfortunately, this is how you end up with something going wrong and the inevitable question is asked, ‘Why was I not informed?’ The answer is usually that it was in an email somewhere.

This is simply a warning note. You either need to protect this time window so that only one or two major calls are scheduled into it, or you need to challenge the approach and ask for communications to be split into regional time zones. As a PM or manager, this is your danger zone. You have to watch it constantly and, stressful as it feels, try to track all the individual updates that might affect your delivery when they arrive as general blasts during this period.

  1. India, the US and Europe.

Wild Adoption Vs Crippling Bureaucracy

This is a lesson from earlier in my career that feels painfully relevant to our current cloud environments.

With all of the cloud services now available, one of the biggest changes we have seen is just how many features are suddenly at everyone’s fingertips. In particular, when it comes to on-demand infrastructure, you can now build things in seconds that would once have taken months. That speed has produced a slightly nervous response from a lot of infrastructure, compliance and finance teams, and for good reason. You can get something into production incredibly fast, unlock a lot of demand, and just as quickly build up a very expensive bill without really meaning to.

Remember, one of Amazon’s core principles is to make it easy for people to give them money, and AWS and the other cloud providers have brought that mindset to cloud provisioning with real enthusiasm.

The problem is that there never seems to be a sensible middle ground.

In many large organisations, infrastructure services now make it cripplingly difficult to get anything done, often far harder than it ever was with on-prem services or specialist hosting. It feels like the only two options on offer are total freedom or total lockdown.

We have been here before.

The first example that always comes to mind is Microsoft Access. When people wanted space on SQL Servers and were denied back in the day, they used Microsoft Access and Excel instead. When they wanted development capability and were denied, they built it themselves. It became a running joke to judge how frustrated the business was by checking the file systems to see how many new Access databases had appeared and how large they had grown.

Lotus Notes followed a similar pattern. In the early days, users were given templates and just enough rights to create their own databases. Huge numbers of them appeared very quickly. Some became production systems, then the servers filled up and chaos followed. The response was to clamp down harder and harder on new databases and features, until eventually it became so difficult to do anything at all that the core reason for having Lotus Notes disappeared. At that point, you might as well have just used a decent email client. In the end, that behaviour helped cripple the platform.

SharePoint inherited many of the same issues, just with different tooling.

Businesses will always route around blockages. You cannot stop that. What I am seeing again now is the same failure pattern. Crippling bureaucracy is being applied to infrastructure. A new easy-to-use tool appears, and instead of guiding its use sensibly, it gets locked down. What happens next is entirely predictable. Make it hard to get a Salesforce site or proper support, and the business will simply go and buy a new tenant. The same is true of Azure and AWS.

You have to find the middle ground.

If you let people have whatever they want, it spreads like wildfire. You do not get good value for money, and people start building their own little empires rather than delivering value to the business. But if you go the other way and make it cripplingly difficult for the organisation to grow, expand, or even function, then that demand will leak out sideways into shadow IT and unofficial platforms. At that point, you have a much bigger problem on your hands.

So when you are planning your services, and planning how users can request and consume them, remember this. If you make it too hard, you are actively stifling the business. It may comply for a while, but it will eventually find another way. History has shown this again and again.
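
As one concrete illustration of what that middle ground can look like, here is a minimal sketch, assuming AWS and boto3: rather than refusing a team an account, give them a sandbox wrapped in a monthly budget that alerts long before the bill becomes a surprise. The account ID, budget figure and email address are all made-up examples.

```python
# A minimal sketch of "guardrails, not lockdown": let a team have a
# sandbox, but wrap it in a monthly budget that shouts before it burns.
# Assumes AWS credentials are configured; account ID, limit and email
# are hypothetical placeholders.
import boto3

budgets = boto3.client("budgets")
ACCOUNT_ID = "111111111111"  # hypothetical sandbox account

budgets.create_budget(
    AccountId=ACCOUNT_ID,
    Budget={
        "BudgetName": "team-sandbox-monthly",
        "BudgetLimit": {"Amount": "1000", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            # Warn the team at 80% of actual spend, well before anyone
            # reaches for the lockdown lever.
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "team-leads@example.com"}
            ],
        }
    ],
)
```

The point of the design is that the default answer is yes, with visibility attached, rather than no, with shadow IT as the inevitable consequence.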

The AI Adjustment

Preface

Before I get going on the subject, I want to make a clear distinction. I am not talking about AI as a financial investment. As it currently stands, that side of things looks as if it is setting itself up quite nicely for a nasty correction, and it is already showing all the hallmarks of another tulip mania or dot-com moment. But I do not know enough about that world to comment in any depth. 1

What I am talking about here is the technological adjustment that will happen when the investment rounds in AI finish, the speculative phase ends, and the technology has to make solid money day after day.

As the media cycle spins on, we keep hearing talk of the AI bubble collapsing, which then becomes talk of the collapse of AI itself. Now I know I swore I was not going to write another thought piece on AI, but here we are.

I don’t think we are going to see a collapse of AI as a technology. We may well see the collapse of a few overextended companies, but what I think we are far more likely to see is a rationalisation of the excess of AI features.

Rather than the dramatic collapse that some people seem to want to hype up, I think it will look much more like what we saw with JavaScript. Those who recognise the word “Netscape” will remember when JavaScript first arrived in browsers and did everything. Then it was discovered that it did everything badly and rather messily, and it was wildly overused. It was clamped down on hard and fell out of favour for a while. Then people realised that actually, this kind of functionality was really useful, as long as it was applied properly, limited in the right places and optimised where it made sense. From there, it grew and grew until it became more popular than ever. I think we are heading for the same kind of correction with AI.

I also think that a lot of the components that make up what people think of as AI will become more visible in their own right. At the moment, most people only think of the big, flashy parts, like Large Language Models. I suspect that things like vector databases will become far more visible, alternative database technologies such as NoSQL will continue to grow in popularity, and orchestration and model routing will simply be built into other systems as standard.

What I really think will happen is that people will build applications and want the power that a full AI stack offers, but they will not want to pay full AI prices for everything. If your cloud costs are already around ten thousand a month, you are not going to want to add another ten to fifteen thousand just for the AI layer. You will want it to be cheaper, more targeted, and you will only want to pay for what it is genuinely worth.

Because of that, I think we will see a breakup of the big, all-encompassing AI platforms, leaving only a few players who can truly operate at global scale, such as Salesforce, OpenAI and Google. The rest of us will use smaller, more focused AI components to deliver the specific features we actually need.

I have mentioned this example before, but claims evaluation is still my favourite. You only need a very small language model and your own local data store to do a serious job of evaluating whether claims are fraudulent or not. You do not need it to hold a full conversation. You do not need chat prompts and all the engineering that goes with that. It is still AI driven, but it is focused and cost-effective.
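
As a rough sketch of what that focused shape could look like (my own illustration, not any particular product), the following assumes a small local embedding model via sentence-transformers plus a handful of made-up historical claims: score a new claim by how closely it resembles past fraud, with no chat interface in sight.

```python
# A minimal sketch of the focused approach: a small local embedding
# model plus your own claims history, no chat prompts anywhere.
# The model name is one common small choice; the data is invented.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small, runs locally

# Your own historical data store: past claims with known outcomes
# (1 = fraudulent, 0 = legitimate).
history = [
    ("Rear-end collision at traffic lights, police report attached", 0),
    ("Phone lost on holiday, receipt unavailable, third claim this year", 1),
    ("Burst pipe damaged kitchen flooring, plumber invoice included", 0),
    ("Laptop stolen from unlocked car, no crime reference number", 1),
]
texts, labels = zip(*history)
vectors = model.encode(list(texts), normalize_embeddings=True)

def fraud_score(claim: str, k: int = 3) -> float:
    """Share of the k most similar historical claims that were fraudulent."""
    v = model.encode([claim], normalize_embeddings=True)[0]
    sims = vectors @ v  # cosine similarity, as vectors are normalised
    nearest = np.argsort(sims)[-k:]
    return float(np.mean([labels[i] for i in nearest]))

print(fraud_score("Phone dropped in the sea, no proof of purchase"))
```

A real system would obviously need far more data and proper evaluation, but the shape is the point: a small model, your own store, one narrow job.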

I think we will see more and more of this approach as people take only what they need, rather than paying for vast agentic systems that do everything, cost too much, and deliver too little in return.

  1. Shares generally terrify me, as I am risk averse, which is why my total shareholding is three Games Workshop shares, held mainly for the amusement value.

Agentforce World Tour, 4th December 2025

A smaller version of the normal Salesforce conference, and very definitely targeted at Agentforce. As a change of pace from the usual Salesforce events, it was very much geared around practical work 1, and that seemed to be the core theme of the day.

It felt like we have now moved past the over-excitement phase of AI. Everyone is asking the same question: how do we actually make money out of this, and what do we really do with it past a fun proof of concept?

There were quite a few large-scale customer success stories pushing cost savings and value, along with a lot of practical coding sessions on how you actually implement things. One interesting thread ran through the day: those who are rolling their own, or “DIY AI” as Salesforce calls it, were very much looked down upon as an unnecessary cost and a waste of effort.

The clear message was: why bother, when you can have prefabbed AI with a set of ready-to-use tools? Alongside that was a constant emphasis on trust and trustworthiness. A valid point, and one that needs making, because if you package up AI and make it opaque about what is going on under the hood, then people really do have to trust the implementation.

My highlight of the day was the Compliance Centre. It is the first truly useful business implementation of AI that I have personally seen. Most other tools feel like clever technology in search of a problem, or thinly disguised cost-cutting exercises. Even the usual line of “freeing people up to do interesting work” often feels forced. The Compliance Centre, however, is genuinely useful. Corporations have to do compliance. They hate doing it. The people tasked with it hate doing it. The people they enforce it on tend to hate them for doing it. It is a miserable necessity. So having a tool that takes large regulatory documents, even giant government-produced PDFs, turns them into practical rules and enforces them inside your data is genuinely powerful. It is exactly the sort of task no one wants a human to do, and no human wants to do.

My view may be slightly coloured by the fact that I genuinely enjoyed the day. It was a smaller event, but I had good company, which always helps. It felt much more like an information dump showing what is coming than a sales drive. They clearly knew that very few organisations had money left to spend at this time of year.

One thing I nearly forgot to mention was the whole “vibe coding” toolkit. It was demoed and had multiple workshops, but in all honesty I have seen the same ideas implemented elsewhere in other development environments. Salesforce are largely catching up here. They might see it as a big step forward, but for many developers it felt more like parity.

Overall though, it was well worth attending. I did not expect much from it, and several colleagues and previous clients skipped it because they assumed it would be too small to bother with. I think they missed out. It was far less hyperactive than the main events and much more focused on practical delivery. I came away with useful tools, new knowledge and a surprisingly positive view of where this part of the platform is heading.

  1. I suspect this is down to the practicalities of it being held in Quarter 4 of the financial year, when no one has any money left.

Corporate term: “Gravy Train Project”

Definition

A gravy train project is a large, long-term initiative that attracts significant promised investment, often sanctioned by very senior decision-makers and spread over multiple years. These projects typically involve substantial spending, the rapid hiring of new staff, and the signing of large and expensive vendor contracts, often without clear short-term deliverables.

Explanation

A gravy train project usually begins with extensive hype. It is positioned as the next big thing and is supported by high-profile backing, whether from major corporations’ investment areas or government bodies. Marketing activity from vendors and internal stakeholders quickly follows, reinforcing the scale and ambition of the initiative.

From a delivery perspective, the outcomes are often framed as being years away. This removes immediate pressure for tangible results while releasing large amounts of funding into the system. As a result, spending accelerates rapidly, and in many cases the expenditure does not align with effective delivery or long-term value.

Vendors and external partners are well aware that this type of funding rarely lasts indefinitely. As the money is expected to dry up eventually, there is often a rush to secure as much of it as possible while the programme still has momentum.

Real-World Context

One of the most visible examples in England has been HS2. There was an excellent interview with the former head of construction on the Channel Tunnel Rail Link into London, in which he contrasted the gravy train nature of HS2 with the way the cross-Channel connection was managed.

His key lesson was that large projects should not release all funding at once. Instead, investment should be announced and approved in tightly defined phases. While the overall programme may span many years, each phase should be funded only when the previous phase has delivered successfully. In effect, this turns one vast programme into a series of smaller, tightly controlled projects.

Why This Matters

This phased approach limits uncontrolled spending, maintains delivery focus, and helps prevent the gravy train mentality that assumes unlimited money for unlimited activity. It also reduces the risk of large-scale failure, reputational damage, and the familiar situation where those who spent most freely are never held to account, while delivery teams are left to manage the consequences and the panic as the funds run dry.

Disclaimer: As always, these posts are not aimed at any one client or employer; they are just my personal observations over a lifetime of dealing with both management and frontline associates.