Category: Content Strategy

Practical guidance on building effective content programs. Writing, editorial processes, and structuring content for visibility across search, generative AI, and thought leadership platforms.

  • Spreadsheet for measuring and tracking how well you are doing in AI searches

    Something I have heard from a couple of recent clients is the importance of actually tracking progress in AI search. As with any part of marketing, you want to create great messaging, materials, content and so on; but you also need to know whether that effort is actually working.

    I use a simple spreadsheet for doing this, attached below:

    Sheet for tracking performance in AEO and GEO

    To use this spreadsheet, just manually work through each of the columns below, for each of the queries where you think you should appear *:

    Query
    → Type the exact search query or question you used (e.g., “What is Generative Engine Optimization?”).

    Date
    → Enter the date you performed the search (format: DD/MM/YYYY).

    Platform (Google / Perplexity / ChatGPT etc.)
    → Note which platform or search engine you used (e.g., Google, Perplexity, ChatGPT).

    Mode (Normal / Incognito / VPN)
    → Indicate how you accessed the platform (e.g., Incognito, Signed-in, VPN).

    Category (SEO / AEO / GEO)
    → Choose which category the result fits into:

    • SEO = Traditional search results
    • AEO = AI or answer-engine snippet
    • GEO = Full generative or LLM-based output

    Result Source (Regular / AI Overview / AI Mode)
    → Identify where the result appeared:

    • Regular Search = Standard search results
    • AI Overview = Google’s AI summary panel
    • AI Mode = Full generative output replacing the usual page

    Result Type (Direct Link / Snippet / Generative Synthesis)
    → Note how your content appeared:

    • Direct Link = SEO link in results
    • Snippet = Text summary or mention
    • Generative Synthesis = LLM or AI-generated paragraph referencing your content

    Did you appear? (Yes / No)
    → Simply record whether your site, name, or content appeared.

    How did it appear? (Quoted / Mentioned in Narrative)
    → Describe how you appeared:

    • Quoted = Direct quotation or citation
    • Mentioned in Narrative = Indirect or paraphrased mention

    Screenshot File
    → Add the filename of your saved screenshot (e.g., Screenshots/7.png).

    * Or, for help getting this set up for yourself, give me a shout on Contact Us to find out more!
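    If you would rather keep the log as a plain file than a spreadsheet, here is a minimal Python sketch of the same idea. The column names simply mirror the sheet above; the filename and example values are placeholders.

    import csv
    from datetime import date

    # Column names mirroring the tracking sheet described above.
    FIELDS = [
        "Query", "Date", "Platform", "Mode", "Category",
        "Result Source", "Result Type", "Did you appear?",
        "How did it appear?", "Screenshot File",
    ]

    def log_check(path, **row):
        """Append one manual AI-search check to a CSV tracking file."""
        with open(path, "a", newline="", encoding="utf-8") as f:
            writer = csv.DictWriter(f, fieldnames=FIELDS)
            if f.tell() == 0:  # write the header the first time the file is used
                writer.writeheader()
            writer.writerow(row)

    log_check(
        "aeo_geo_tracking.csv",  # placeholder filename
        **{
            "Query": "What is Generative Engine Optimization?",
            "Date": date.today().strftime("%d/%m/%Y"),
            "Platform": "Perplexity",
            "Mode": "Incognito",
            "Category": "GEO",
            "Result Source": "AI Mode",
            "Result Type": "Generative Synthesis",
            "Did you appear?": "Yes",
            "How did it appear?": "Mentioned in Narrative",
            "Screenshot File": "Screenshots/7.png",
        },
    )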

  • Focus on quality – lessons from Zen and the Art of Motorcycle Maintenance

    Introduction

    What is “quality” content?

    If you’ve ever read Zen and the Art of Motorcycle Maintenance by Robert Pirsig, you’ll know his reflections on craftsmanship go far beyond fixing bikes. As he wrote:

    “[In a craftsman], the material and his thoughts are changing together in a progression of changes until his mind’s at rest at the same time as the material is right… The state of mind which enables a man to do work of this kind is akin to that of a religious worshipper or love. The daily effort comes from no deliberate intention or program, but straight from the heart.”

    So what does this have to do with marketing B2B software? Everything. Quality content doesn’t just appear—it’s the product of care, thought, and a refusal to cut corners. And in a world where ChatGPT and Gemini can churn out endless filler, the question is: how do we balance speed with true understanding?


    1. Quality comes from craftsmanship, not shortcuts

    Pirsig’s idea of quality begins with the mindset of a craftsman: someone who approaches their work as an act of care. Marketing is not so different. It’s easy to produce “good enough” content that ticks the boxes—an SEO-friendly headline, a keyword-laden body, and a quick conclusion. But “good enough” rarely makes an impact.

    Craftsmanship in marketing means paying attention to the details that others overlook. It’s about making sure the story fits the audience, the examples resonate, and the argument stands up to scrutiny. That effort creates trust with your audience.

    Shortcuts, by contrast, show through quickly. Readers can tell when a piece was rushed or when it lacks real understanding. In B2B, where buying cycles are long and decisions are complex, that kind of shortcut can erode credibility.


    2. It’s about how you conduct yourself to get the right results

    Content is at the core of almost every marketing role. But while output matters, how you go about producing it matters even more. The process you follow—whether you take time to think, reflect, and refine—directly affects the quality of what you publish.

    This isn’t about perfectionism. Few teams have the luxury of unlimited time, and deadlines are real. Instead, it’s about finding the balance between speed and depth. Are you creating something your company can stand behind, or are you cutting corners for the sake of throughput?

    The companies that consistently build trust with their audiences are the ones that find this balance. They know that content is not just a task to be completed, but a reflection of their brand, their people, and their standards.


    3. AI tools can’t replace real understanding

    Many marketing teams are now experimenting with ChatGPT, Gemini, and other AI writing assistants. At first, it seems like a time-saver. You can generate an outline, expand bullet points into paragraphs, even draft an entire blog in minutes.

    But there’s a problem. If you rely entirely on AI, you end up with content that looks right but lacks depth. It’s the equivalent of a student handing in an essay written by someone else: the words are there, but the understanding is missing. Readers pick up on that lack of authenticity quickly.

    I know this from experience. I once tried using ChatGPT to write content as an experiment. On the surface, it looked fine. But the work felt hollow, and I learned nothing from the process. I decided never again—because without understanding, content has no foundation.


    4. Real learning builds pride in your work

    A better approach is to use AI sparingly—as a support, not a substitute—and to invest in real learning. Recently, I needed to use Docker to isolate two jobs running in Linux. I could have relied on ChatGPT prompts like: “Use Docker to create 2 new environments for me.”

    The AI gave me fragments of an answer, but nothing that built my understanding. I didn’t know why containers worked, what the benefits were, or how environments could communicate. If I had stopped there, I would have had a shallow solution—and no ability to troubleshoot later.

    Instead, I worked it out for myself, guided by Pirsig’s principle of focusing on quality. I learned enough to understand how Docker really worked, and I felt a sense of pride in the result. That pride matters—it creates confidence in your own expertise, and it shows through in the content you create.


    5. Human insight will always beat AI slop

    Ultimately, content written with care—by fallible, thoughtful humans—will always outperform what I call “AI slop.” Audiences crave originality, perspective, and voice. These are things that AI, for all its power, cannot truly replicate.

    There’s also a practical point. Every AI query consumes processing power and energy. By doing the thinking yourself, you save not just your credibility but also resources. It’s a small act of responsibility in a world increasingly flooded with machine-generated text.

    Quality content comes from insight, not automation. It comes from asking the right questions, reflecting on your experiences, and presenting ideas in a way that matters to your audience. That’s why, even as AI becomes more common, human-centered quality will continue to stand out.


    Conclusion

    In marketing, as in Pirsig’s philosophy, quality is not just about the output but about the mindset of the person creating it. AI can be a useful tool for sparking ideas, but it can’t replace the depth of insight you gain from doing the hard work yourself.

    Writing content that matters takes time, thought, and pride. It’s about more than hitting publish—it’s about understanding your craft well enough to stand behind it. That’s the kind of content that will always stand out from the noise.

    For more on this idea of quality in work, see:


  • Rewriting Content Strategy with LLMs and Pinecone

    I’ve been experimenting with large language models (LLMs) and vector databases like Pinecone — not just as a research interest, but as a working prototype. My goal was to build a system that could retrieve, structure, and surface my own content in a way that’s useful to both people and machines.

    What started as a technical exercise quickly turned into a content strategy rethink. The more I worked with embeddings, retrieval, and prompting, the more obvious it became that most B2B SaaS content — mine included — isn’t really designed to be useful in an LLM-shaped world.

    This post is a set of observations from that process. It’s not a how-to, and it’s definitely not marketing advice. It’s just a few things I’ve noticed while trying to make my content more legible — to machines, yes, but also to myself.


    1. LLMs don’t skim, they distill

    One of the first things I noticed was how differently LLMs process content. They’re not scanning a web page for formatting cues or crawling a hierarchy of headings. They’re vectorising meaning — pulling intent and structure from the text itself.

    This rewards clarity over cleverness. Vague intros, overused analogies, and “setting the stage” paragraphs get flattened. What works best is directness: “This is what the user needs to know, and here’s what we know about it.”


    2. Most content is badly stored

    I had to dig through slide decks, half-written blog drafts, and internal notes to feed the system anything useful. And even when I did, it wasn’t in a format the LLM could make much sense of.

    A lot of our content isn’t unfindable because it’s private — it’s unfindable because it’s scattered, fragmented, and inconsistently written. Structuring information (even just basic metadata and formatting) turned out to be more useful than adding “AI” to anything.
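    To give a sense of what “basic metadata and formatting” looked like in practice, here is a rough sketch of the kind of record I ended up feeding the system. The field names are just my own convention for the example, not anything Pinecone or the model requires.

    # A hypothetical content record: the point is consistent structure and
    # metadata, not this particular schema.
    record = {
        "id": "blog-2024-governance-risks",              # stable identifier
        "title": "Common governance risks in Microsoft 365",
        "source": "blog",                                # blog / slide deck / internal note
        "url": "https://example.com/governance-risks",   # placeholder URL
        "summary": "Three governance risks and how to mitigate them.",
        "body": "External sharing is the most common source of ...",
        "tags": ["governance", "Microsoft 365"],
        "last_updated": "2024-06-01",
    }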


    3. Answerability is the new readability

    When I tested my system by asking questions for Syskit — “What are common governance risks in Microsoft 365?”, for example — it only worked if the source material actually contained answers. Not positioning. Not messaging. Actual sentences that respond to an implied question.

    I started to think of this as “answerability”: could this content, in its current form, directly answer a user or AI prompt? If not, it’s probably not useful — not to the system, and not to anyone else either.


    4. Consistency matters more than tone

    LLMs are surprisingly good at detecting contradiction. If one post says we support something and another implies we don’t, the system flags ambiguity. That’s useful — but also a bit exposing.

    I used to think consistency was about branding. Now I think it’s about information integrity. If the machine can’t reconcile what you’re saying across multiple assets, it won’t confidently say anything at all.


    5. Structure beats style

    There’s nothing wrong with good writing. But good structure — clear subheadings, defined sections, and consistent terminology — outperforms style every time when you’re working with LLMs.

    Most of what I had to rewrite wasn’t because the sentences were bad. It’s because the paragraphs had no job. There was no signal about what a block of text was meant to do: define, explain, compare, warn, resolve.

    Once I started thinking about content structurally — almost like documentation or an API — everything started working better.


    6. You can’t fake this with ChatGPT

    There’s a temptation to take short-cuts: paste your post into ChatGPT, ask for SEO suggestions, then call it LLM-optimised. But when you’re building your own retrieval stack, you realise pretty quickly that what matters isn’t how AI generates content — it’s how it understands it.

    Most B2B content isn’t referenceable because it’s too shallow, too scattered, or too brand-filtered. You can’t prompt your way around that. You have to fix the source.


    Final thought

    Building with LLMs — even in a small way — forced me to re-evaluate how I write, store, and structure information. The tools didn’t just change the output. They changed how I think about the inputs.

    That seems worth paying attention to.


  • Stage 3 – going beyond keyword search

    When building search tools, intelligent assistants, or AI-driven Q&A systems, one of the most foundational decisions you’ll make is how to retrieve relevant content. Most systems historically use keyword-based search—great for basic use cases, but easily confused by natural language or synonyms.

    That’s where embedding-based retrieval comes in.

    In this guide, I’ll break down:

    • The difference between keyword and embedding-based retrieval
    • Real-world pros and cons
    • A step-by-step implementation using OpenAI and Pinecone
    • An alternative local setup using Chroma

    Keyword Search vs. Embedding Search

    Keyword-Based Retrieval

    How it works:
    Searches for exact matches between your query and stored content. Works best when both use the same words.

    Example:
    Query: "What is vector search?"
    Returns docs with the exact phrase "vector search".

    Pros:

    • Very fast and low-resource
    • Easy to explain why a match was returned
    • Great for structured and exact-match data

    Cons:

    • Doesn’t understand synonyms or phrasing differences
    • Fails if the words aren’t an exact match
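    To make the contrast concrete, here is a deliberately naive toy keyword matcher, purely to illustrate exact-match behaviour:

    def keyword_search(query, docs):
        """Return docs whose words include every term in the query (exact matching)."""
        terms = set(query.lower().split())
        return [d for d in docs if terms <= set(d.lower().split())]

    docs = [
        "Vector search compares embeddings of documents.",
        "Meaning-based search finds semantically similar text.",
    ]
    print(keyword_search("vector search", docs))    # matches the first doc only
    print(keyword_search("semantic search", docs))  # [] -- "semantically" is not the word "semantic"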

    Embedding-Based Retrieval (Semantic Search)

    How it works:
    Both queries and documents are converted into dense vectors using machine learning models (like OpenAI’s text-embedding-ada-002). The system compares their semantic similarity, not just their words.

    Example:
    Query: "How does semantic search work?"
    Returns docs about “meaning-based search” even if the words are different.

    Pros:

    • Understands intent, not just keywords
    • Great for unstructured content and natural queries
    • Can surface more relevant results even if phrasing is varied

    Cons:

    • More computationally intensive
    • Results are harder to explain (based on vector math)
    • Requires pre-trained models and a vector database
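    Under the hood, “semantic similarity” usually just means cosine similarity between vectors. Here is a minimal sketch with made-up three-dimensional vectors (real embeddings have hundreds or thousands of dimensions):

    import math

    def cosine_similarity(a, b):
        """Cosine of the angle between two vectors: closer to 1.0 means more similar."""
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm

    # Toy 3-dimensional "embeddings" -- real models produce 1,536 or more dimensions.
    query_vec = [0.9, 0.1, 0.3]
    doc_about_search = [0.8, 0.2, 0.25]   # semantically close to the query
    doc_about_cooking = [0.1, 0.9, 0.7]   # unrelated topic

    print(cosine_similarity(query_vec, doc_about_search))   # ~0.99
    print(cosine_similarity(query_vec, doc_about_cooking))  # ~0.36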

    Feature Comparison Table

    Feature         | Keyword-Based Retrieval | Embedding-Based Retrieval
    Search Logic    | Matches words exactly   | Matches by meaning
    Flexibility     | Low                     | High
    Speed           | Fast                    | Slower
    Resource Use    | Low                     | Higher
    Explainability  | High                    | Low
    Best For        | Structured search       | Chatbots, recommendation, unstructured data
    Common Tools    | Elasticsearch, Solr     | Pinecone, Chroma, FAISS

    Setting Up Embedding-Based Retrieval

    Let’s build a basic semantic search system using:

    • OpenAI (text-embedding-ada-002)
    • Pinecone (hosted vector DB)
    • Chroma (optional local alternative)

    1. Choose Your Tools

    Embedding model:
    OpenAI’s text-embedding-ada-002 or a local Hugging Face model.

    Vector database:
    Cloud: Pinecone (scalable, managed)
    Local: Chroma (open-source, lightweight)

    2. Install Required Libraries

    # The snippets below use the pre-1.0 OpenAI and pre-3.0 Pinecone client interfaces
    pip install "openai<1.0" "pinecone-client<3" chromadb

    3. Set API Keys

    export OPENAI_API_KEY="your-openai-key"
    export PINECONE_API_KEY="your-pinecone-key"

    In Python:

    import openai
    openai.api_key = "your-openai-key"

    4. Generate Embeddings

    def get_embedding(text):
        # Pre-1.0 openai-python interface; returns a 1536-dimensional ada-002 vector.
        response = openai.Embedding.create(
            input=text,
            model="text-embedding-ada-002"
        )
        return response['data'][0]['embedding']
    
    documents = [
        {"id": "1", "text": "This is an introduction to embedding-based search."},
        {"id": "2", "text": "Embedding-based retrieval finds similar meanings."},
    ]
    
    for doc in documents:
        doc['embedding'] = get_embedding(doc["text"])

    5. Store in Pinecone

    import pinecone
    
    pinecone.init(api_key="your-pinecone-key", environment="your-environment")  # environment comes from the Pinecone console, e.g. "us-east-1-aws"
    
    index_name = "embeddings-index"
    if index_name not in pinecone.list_indexes():  # only create the index once
        pinecone.create_index(index_name, dimension=1536, metric="cosine")
    
    index = pinecone.Index(index_name)
    
    to_upsert = [(doc['id'], doc['embedding'], {"text": doc["text"]}) for doc in documents]
    index.upsert(vectors=to_upsert)

    6. Perform a Semantic Search

    query = "How does semantic search work?"
    query_embedding = get_embedding(query)
    
    results = index.query(vector=query_embedding, top_k=5, include_metadata=True)
    
    for match in results["matches"]:
        print(f"ID: {match['id']} | Score: {match['score']}")
        print(f"Text: {match['metadata']['text']}\n")

    Optional: Use Chroma for Local Embedding Search

    import chromadb
    
    client = chromadb.Client()
    collection = client.create_collection("documents")
    
    for doc in documents:
        collection.add(
            documents=[doc["text"]],
            embeddings=[doc["embedding"]],
            ids=[doc["id"]]
        )
    
    # Query with an embedding from the same model used to store the documents.
    # (Passing query_texts would use Chroma's built-in embedder, whose vectors
    # have a different dimensionality to ada-002 and so wouldn't match.)
    query_result = collection.query(
        query_embeddings=[get_embedding("How does embedding retrieval work?")],
        n_results=2,  # only two documents are stored
    )
    print(query_result)

    Evaluate the Results

    Once you’re set up:

    • Check result relevance
    • Tune your top_k or switch models if needed
    • Add keyword filtering for hybrid search

    You now have a foundation for building:

    • Intelligent assistants
    • Internal knowledge base search
    • Chatbots that retrieve based on meaning

    What’s Next?

    You can scale this up to thousands or millions of documents. Consider:

    • Crawling blogs, docs, or Notion pages
    • Combining embeddings with filters or metadata
    • Using hybrid keyword + embedding pipelines for speed and precision
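    To illustrate that last point, here is a rough sketch of a hybrid pipeline. It assumes the keyword_search helper, the get_embedding function, the documents list and the Pinecone index from earlier in this post; the 0.1 keyword bonus is an arbitrary illustrative weight, not a recommendation.

    def hybrid_search(query, docs, index, top_k=5):
        """Hybrid retrieval sketch: cheap keyword pre-filter plus semantic ranking."""
        # 1. Cheap keyword pass over the raw documents.
        keyword_hits = set(keyword_search(query, [d["text"] for d in docs]))

        # 2. Semantic pass against the vector index.
        semantic = index.query(vector=get_embedding(query), top_k=top_k, include_metadata=True)

        # 3. Blend: nudge up results that also contained the exact keywords.
        ranked = []
        for match in semantic["matches"]:
            text = match["metadata"]["text"]
            bonus = 0.1 if text in keyword_hits else 0.0
            ranked.append((match["score"] + bonus, text))
        return sorted(ranked, reverse=True)

    print(hybrid_search("How does semantic search work?", documents, index))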

  • Stage 2 – making sense of the chaos

    This is the part where all the content sources came together into a centralized system I could actually interact with.

    This post is a cleaned-up record of what I built, what worked, what didn’t, and what I planned next. If you’ve ever tried to unify fragmented notes, decks, blogs, and structured documents into a searchable system, this might resonate 🙂


    What I Built

    There were two main components at the heart of the system:

    1. Batch Processing Script
      PopulateChatSystemDataRepository.py — this was run manually to gather and format all source data into a single repository. My plan was to automate it later.
    2. Continuous Scanner
      A lightweight background service monitored for new blog posts and updates.

    At that point, the batch script did the heavy lifting, though I intended to shift it onto Google Cloud Run to handle scale.


    Where the Data Lived

    The sources I processed included:

    • PowerPoint files
      These were manually selected and hardcoded into the script — a reasonable tradeoff given how few I needed to track.
    • RSS Feeds
      • My blog at bjrees.com
      • A few curated industry insight feeds
    • OneNote Notebooks, such as:
      • Project documentation (e.g. Skynet, The Oracle)
      • Notes from a Cambridge Judge Business School programme
      • Third-party and personal research logs
    • iCloud Backups
      These contained archived slide decks and supporting materials.

    All of this data was funneled into a staging area for eventual vector embedding and retrieval.


    Microsoft Graph API + OneNote

    To pull content from OneNote, I used the Microsoft Graph API. First, I installed the required libraries:

    pip install msal requests
    • msal handled authentication via Azure Active Directory
    • requests allowed me to interact with the Graph API endpoints

    Once I authenticated, I could enumerate and query notebooks like this:

    python ExtractNotes.py

    After logging in via a Microsoft-generated URL, I could successfully extract content from all the notebooks I needed.
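    For reference, the authentication and notebook listing looked roughly like the sketch below. The client ID, authority and scope are placeholders from an Azure app registration, and this is a simplified stand-in for what ExtractNotes.py actually did.

    import msal
    import requests

    CLIENT_ID = "your-azure-app-client-id"   # placeholder app registration
    AUTHORITY = "https://login.microsoftonline.com/consumers"
    SCOPES = ["Notes.Read"]

    app = msal.PublicClientApplication(CLIENT_ID, authority=AUTHORITY)

    # Device-code flow: msal prints a URL and a code, and you sign in via the browser.
    flow = app.initiate_device_flow(scopes=SCOPES)
    print(flow["message"])
    token = app.acquire_token_by_device_flow(flow)  # assumes the sign-in succeeds

    # List OneNote notebooks through the Graph API.
    headers = {"Authorization": f"Bearer {token['access_token']}"}
    resp = requests.get("https://graph.microsoft.com/v1.0/me/onenote/notebooks", headers=headers)
    for notebook in resp.json().get("value", []):
        print(notebook["displayName"])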


    Licensing Curveballs

    At the time, I hit a snag: my Microsoft 365 Family plan didn’t include SharePoint Online, which was required to query OneNote via the Graph API.

    I weighed my options:

    1. Pay for a Business Standard plan (~£9.40/month)
    2. Try to use my home license in some way, even though it didn’t seem to have what I needed for OneNote

    I went with option 2, supported by a one-month free trial of Microsoft Business Basic to help validate the approach.


    Google Sheets as the Backbone

    The ingestion script used a JSON keyfile to interact with Google Sheets. It opened the sheet like this:

    client.open_by_key(sheet_id).sheet1

    Sheets acted as a live database — but I ran into 429 rate-limit errors, especially when repeatedly reading the same files. To solve this, I built a basic checkpointing system so the script would:

    • Cache previously processed records
    • Avoid re-downloading the same content every time
    • Track progress and only fetch new entries on each run
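    Here is a minimal sketch of that checkpointing idea, using gspread (the library implied by the JSON keyfile) and a local JSON file as the cache. The sheet layout and the "id" column are assumptions for the example, not the real schema.

    import json
    import os
    import gspread

    CHECKPOINT_FILE = "processed_ids.json"   # local cache of already-processed rows

    def load_checkpoint():
        if os.path.exists(CHECKPOINT_FILE):
            with open(CHECKPOINT_FILE) as f:
                return set(json.load(f))
        return set()

    def save_checkpoint(ids):
        with open(CHECKPOINT_FILE, "w") as f:
            json.dump(sorted(ids), f)

    client = gspread.service_account(filename="keyfile.json")  # the JSON keyfile
    sheet = client.open_by_key("your-sheet-id").sheet1

    seen = load_checkpoint()
    for row in sheet.get_all_records():      # one bulk read per run, not one per record
        row_id = str(row.get("id", ""))      # assumes an "id" column in the sheet
        if not row_id or row_id in seen:
            continue                         # skip anything processed on a previous run
        # ... process the new record here ...
        seen.add(row_id)

    save_checkpoint(seen)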

    The GitHub Reset

    After a short break from the project, I realized the codebase had grown too complex. I had introduced a lot of logic to deal with throttling and retries, but it made everything harder to understand.

    So I rolled back to a much earlier commit and started again from a simpler foundation.

    It was the right move.


    What Came Next

    Here’s what I tackled after that cleanup:

    • Migrated the whole project to an old home laptop
    • Simplified the ingestion pipeline
    • Ensured each run processed only new data, not the full archive
    • Finalized access and querying via Microsoft Graph API for OneNote and SharePoint content

    Reflections

    Skynet began as a chatbot experiment, but evolved into something bigger — a contextual knowledge system that drew from years of notes, presentations, and personal writing.

    Stage 2 was about turning chaos into structure. The next phase was even more exciting: embeddings, retrieval, and building a system that could answer real questions, grounded in my own work.


    Read Stage 1 if you missed the start.


  • Making decisions in a Bayesian world

    Most of your time as a marketing leader is spent trying to make decisions with inadequate data. In an ideal world, we would have run an A/B test on everything we wanted to do, looked at the numbers and then made a decision. Which image should we use for our new advert? What message? What tone? Which type of customer are we trying to reach? And 1,000 other things.

    A/B testing is one way of approaching this problem. The difficulty is that most marketers will – and should! – already have a view. If I was given the choice between two headlines:

    1. Find out how our products can help you
    2. Click here to give us some money

    I know which one I would click on; I definitely don’t need to do a test!

    But here is a more realistic example. You are trying to sell into a company and you are not sure who makes the decision. Is it the end user? The manager? The person holding the purse strings?

    How on Earth do you do an A/B test for something like this?

    With a question like this, you will soon hit the “everyone has an opinion” problem. You ask various members of your team, and people outside it, and everyone gives a different answer. There are a couple of ways out of this situation, as I’ve mentioned, but doing an A/B test is generally completely impractical.

    So what can you do? The approach that I take now is to use some of the concepts from Bayesian logic to help me make the decision. The key concept is the idea that every decision you make is a combination of your prior knowledge plus the data that you see. And the real issue with prior knowledge is that everyone comes to the table with their own history.

    As an example, if you have been running a marketing team for years, and all you have been doing is numbers-driven digital marketing for B2C businesses – and, crucially, you have had success with it – then you will start your analysis with that approach in mind. The answer to the question “what should we do next for our marketing?” will very likely be something around digital marketing strategy. In contrast, if you come to the table from a brand marketing background, your initial opinions will likely favour that sort of approach. Why? Because it is what you know, and there’s a good chance you’ve had success with it at some point in the past.

    Crucially, just asking the question “which is right?“ will not get you anywhere! You each have prior knowledge that you are bringing into the process. So what do you do when you start running a campaign and you start to get results, albeit with very low numbers? How do you combine your prior knowledge of what should happen with what is actually happening?

    This is where the Bayesian approach can be very useful. I don’t think you need to understand any maths to use this approach; it is about the principles behind it.
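    That said, for anyone who does want to see the mechanics, here is a small Beta-Binomial sketch of “prior plus data”: a prior belief about a campaign’s response rate, nudged by some early results. The numbers are entirely made up.

    # Prior: you believe the campaign's response rate is around 5%, with some
    # uncertainty -- encoded here as a Beta(5, 95) distribution (made-up numbers).
    prior_alpha, prior_beta = 5, 95

    # Early data: 200 recipients, 4 responses (low numbers, as discussed above).
    responses, trials = 4, 200

    # Posterior: the prior parameters simply shift by the observed counts.
    post_alpha = prior_alpha + responses
    post_beta = prior_beta + (trials - responses)

    prior_mean = prior_alpha / (prior_alpha + prior_beta)
    posterior_mean = post_alpha / (post_alpha + post_beta)

    print(f"Prior belief:     {prior_mean:.1%}")      # 5.0%
    print(f"Posterior belief: {posterior_mean:.1%}")  # 3.0% -- nudged by the data, not thrown away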

    I first read about this principle in the book below:

    I have recommended this book about six times on this blog, so I am definitely a fan! The key part that is relevant to this blog post I have copied below. I tried to paraphrase it, but then I realised that Sean Carroll’s short explanation is better than anything I could come up with:

    Prior beliefs matter. When we’re trying to understand what is true about the world, everyone enters the game with some initial feeling about what propositions are plausible, and what ones seem relatively unlikely. This isn’t an annoying mistake that we should work to correct; it’s an absolutely necessary part of reasoning in conditions of incomplete information. And when it comes to understanding the fundamental architecture of reality, none of us has complete information.

    Prior credences are a starting point for further analysis, and it’s hard to say that any particular priors are “correct” or “incorrect.” There are, needless to say, some useful rules of thumb. Perhaps the most obvious is that simple theories should be given larger priors than complicated ones. That doesn’t mean that simpler theories are always correct; but if a simple theory is wrong, we will learn that by collecting data. As Albert Einstein put it: “The supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience.”

    Everyone’s entitled to their own priors, but not to their own likelihoods.

    This might feel like a slightly obscure deviation from the subject matter of this blog (marketing!) but I don’t think it is.

    Unlike many other areas, it is very difficult in marketing to come up with definitive evidence for why one approach is better than another. This can lead to endless back and forth and debate about messaging and other areas – or, worse, you can end up with the HiPPO principle for making decisions (“Highest Paid Person’s Opinion”).

    But if you understand that this is where a lot of people are coming from – that they are making decisions based on their prior experience and not necessarily the facts in front of them – then it becomes much easier to have a rational discussion. There will be a very reasonable logic behind why somebody is arguing for something. Listen to that person, interpret, and apply intelligently.

  • Slaying a few marketing myths

    We’ve been doing some digital marketing work recently, and the more time I spend on digital work, the more beasts I feel need to be slain.

    NB: I’m talking specifically about B2B marketing here – which is important. It’s important because many of the problems that B2B marketers face come from taking a “copy and paste” approach from B2C into B2B. But I think these two jobs are completely different.

    Myth 1 – A/B testing is valid when writing copy

    I’m not a fan of A/B testing generally, mostly for statistical reasons – these tests are almost never done on a large enough volume to be valid. But even if you did have an enormous data set, would it still be useful?

    I don’t think so. When potential customers are looking for a product that fulfils their needs, the language that you use, particularly in a digital advert, has to be as close to perfect as it can be. Not just the words, but the insight, the phrasing, the context and so on. We’ve all seen ads where the copy is just “not quite right”. Are you presenting your product in the most appropriate way? Should you be describing a feature, or the advantages and benefits? Should you be targeting someone more senior or more junior? Should the wording be laser-focused on a specific use case, or more generic?

    The answers to these questions won’t come from an A/B test. They will come from sitting in front of a customer talking to them about their business and drivers, then finding a way to formulate that into something appealing and simple. And that’s why marketing is such an interesting discipline to work in!
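    As an aside on the statistical point above: a standard two-proportion power calculation, with made-up but plausible numbers (a 2% baseline click-through rate and a 25% relative lift), suggests you need tens of thousands of impressions per variant before the result means anything.

    import math

    # Rough sample size per variant for an A/B test on click-through rate,
    # using the standard two-proportion formula (95% confidence, 80% power).
    z_alpha, z_beta = 1.96, 0.84   # two-sided 5% significance, 80% power
    p1 = 0.020                     # baseline CTR (assumed)
    p2 = 0.025                     # CTR after a 25% relative lift (assumed)

    n = ((z_alpha + z_beta) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))) / (p1 - p2) ** 2
    print(math.ceil(n))            # roughly 14,000 impressions per variant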

    Myth 2 – More is better

    Surely if you have 10 different messages going out to customers about 10 different value propositions, that’s better than one or two? Surely?

    I don’t think so. I love the phrase “You will get bored of your marketing before your customers do”. If we are lucky, very lucky, then our customers will be able to remember one thing about us, about what we do. It might be something like “Do you do security software or something?”. Or “Are you a Microsoft add-on?”. To try to get this message inside the heads of potential customers, it has to be repeated over and over and over. Then, if you are lucky, when they have a problem that you can solve they will have an aha moment when they remember “oh yes, Syskit, they do something for that, don’t they?”. They will then Google your company name, find you, read your website and make a decision about whether to go further.

    This is a massive win – it’s your brand advertising dollars at work. But you undermine this advertising if you keep chopping and changing what you are saying. If one day you’re selling on price, the next on functionality, and the next on something else, then they won’t know what you do and they won’t think of you when they have a problem you could solve.

    So choose the single killer feature, figure out why customers should care and then repeat, repeat, repeat.

    Myth 3 – If I can’t show the ROI of a campaign, I shouldn’t run it

    This is perhaps one of the most dangerous myths in marketing. There are ways of showing the ROI of certain sorts of activities; for example, I think it is possible to show the return on exhibiting at a conference (add up the spend, add up the opportunity value from the people who attended over the subsequent few months, and so on).

    But for 95% of what you do, this isn’t possible. And this is where the big difference between B2B and B2C marketing becomes apparent. I believe it is impossible to fully measure “the experience” of a customer interacting (or not) with your advertising. Most of the messages a customer sees aren’t measured anywhere, particularly not by Google. Of course Google says it measures them, but if you spend time with the numbers you realise how much is missed.

    Given this, I feel there’s an enormous amount of value that comes from certain sorts of marketing work, whether content, advertising or whatever. But it would be very dangerous to switch that off just because you couldn’t “prove” its value. You wouldn’t understand the mistake you had made until it was too late, when you had cut the advert and moved on to something else. So have faith that it is working and keep your eye on the messaging, to make sure you’ve got it laser-focused.

    Myth 4 – Exhibiting at events is a waste of time

    Events are expensive to attend. The exhibition fees, travel, hotels, meals and much more. So the question is, are they worth it?

    I think they are, but not necessarily in the most obvious, direct way. For me, meeting potential customers in any way is the most important activity you can do. It is very difficult to just bump into customers, so you need to find somewhere they congregate (NB: going to visit them one to one is also a great use of time).

    The reason I think it’s so important is because you can have proper, in-depth, honest conversations with attendees. What do they really value? What do they really think of your company? Who else do they like and why? I have often spent 20 minutes with a customer on a stand going through the details of their problems, and a lot of that content went straight into adverts or blog posts the next week. It makes the copy very easy – just parrot back what your visitors said, with a little anonymisation, and hey presto! It almost feels like cheating.

    There is of course the question of ROI, which often comes up. And that should definitely be considered – you shouldn’t be flying around the world for an expensive event when there will only be 15 attendees. But assuming you’re making smart decisions about budget and the types of people who attend, it’s very possible you will generate enough interest to cover your costs, and you will get the incredible insights about the market for free.

    There are many more myths to be slain, and I’ll add them in as I remember them! At Syskit, we are clearly a 100% B2B company. All the marketing we do is in that model. This makes working here much easier, as you know which advice to take on and which to ignore. It also focuses your time more on understanding customers, and a bit less on the latest tricks and tips from Google.


  • How to add ChatGPT to your own website

    There are many stages of exploring ChatGPT:

    • Reading about it on the Internet
    • Finding a website with a chatbot on it (for example, https://chat.openai.com/), and having a go yourself, if only to see what everybody is talking about
    • Adding a generic chatbot to your own website. I’m not quite sure why you would do this, but it’s part of the process of understanding how to integrate ChatGPT into your website
    • (now it starts to get more interesting…) automatically creating an FAQ for your website based on your content
    • Creating a ChatGPT bot that can go on your site for your customers to use to find out more about you and your company

    I’ll talk about the first four points here and then, in the next article, the last point. This is a considerably bigger task, so needs a post of its own. The end goal is to allow customers and potential customers to come to your site and ask questions about your offering. There are two advantages to this approach:

    • If you’re resource constrained, you don’t have the people to be on the phones answering questions all the time.
    • Consistency. You can manage and see what’s being said to your customers on the website.

    But where should I start?


    Reading about it on the Internet

    Not a whole lot more I can add here. If anything, it’s hard to escape articles about the topic. The BBC has some good articles.

    Playing with a chatbot yourself

    The first question most people have is “But are these things any good? They’ll never fool me!”. Don’t listen to others, try it out yourself. I’d suggest that the OpenAI website itself is a great starting point. You may need to create credentials first, but spending some time here will really show you the power of what everybody is talking about. Here’s a pretty random example. I asked “What is account based marketing?”:

    That’s very good. Yes, it’s a little generic, but I’ve done that with no effort, no research. If you wanted to find out about a new topic at work, 30 minutes with ChatGPT would get you on your way.

    Adding a generic chatbot to your site

    Really this is a preparatory step before going on to the next more interesting stage. But it does introduce some of the useful resources. 

    I use WordPress for my site, so my example here is for WP. But the principle is the same – the difficult bit is creating the training data and then training up a model. If you can do that, then getting it on to WordPress is easy.

    I started at: https://www.forbes.com/sites/barrycollins/2023/02/18/how-to-build-a-chatgpt-chatbot-for-your-website-in-minutes/. Rather than me writing out a step-by-step guide, all of which I would be plagiarising from this and related sites, I’d suggest working through the guide there (if you’re willing to wade through all the pop-ups that plague the modern website!).

    This is where you’re really starting:

    In particular, I want to highlight the Jordy Meow plugin. This is an incredible bit of kit; I was repeatedly pleasantly surprised by what was available and how easy it was to install and get working. This is no mean feat, given that we’re moving into the territory of training AI models.

    Like any WordPress plugin you install it from your dashboard. Then, on your WordPress site you’ll have something that looks a bit like the following:

    Creating an FAQ for your website

    So far we’ve looked at generic chatbots which are all over the web. But you want something for your website, based on your industry. 

    Again, I’m not going to go through the details of doing this because there are some fantastic notes on the AI engine help pages, and it will be different for different sites. But the most important point, the place where you need to spend most time and the place where you can really differentiate is on the training content. This will sound familiar to anybody who’s worked in marketing, but if you’re creating something to help you generate interesting content, then you have to have some interesting content to start with. I’ve used the process on this site to create 100+ questions and answers, without having to write a single question myself. The engine is so powerful that you can just give it a block of well written marketing text and it will automatically create some questions and answers from that text. To create the FAQs on this site I simply fed the engine the 94 blog posts I’ve written over the last 10 years and asked it to give me some questions based on this input. 

    You can see some of the results on this page. Remember all of these were auto generated, including the actual questions:

    I’ve been enormously impressed by AI Engine and the work done by Meow Apps.
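    If you are curious about the underlying idea outside WordPress, here is a rough sketch of generating Q&A pairs from a block of existing content using the OpenAI API directly (the pre-1.0 openai-python client). This is not how the AI Engine plugin works internally, just an illustration of the principle; the model name and prompt are examples.

    import openai

    openai.api_key = "your-openai-key"

    def generate_faq(source_text, n_questions=3):
        """Ask the model to turn a block of existing content into Q&A pairs."""
        prompt = (
            f"Based only on the text below, write {n_questions} FAQ questions "
            f"with short answers.\n\n{source_text}"
        )
        response = openai.ChatCompletion.create(   # pre-1.0 openai-python interface
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        return response["choices"][0]["message"]["content"]

    blog_post = "Account based marketing (ABM) focuses sales and marketing effort on a small number of named accounts..."
    print(generate_faq(blog_post))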

    All great, but isn’t this a marketing blog? This just feels like a lot of technical detail! Well yes, that’s true. But one of the ways you can differentiate yourself from the crowd as a marketer is by moving on from just talking about technology to actually showcasing it. You can significantly boost your career by properly understanding how AI technologies can impact marketing. This needs to be more than just “Add AI to your marketing efforts!”. Your claims need substance, and this is where the hard work comes in. I had the advantage that I’ve been writing blog posts for 10 years or more, so I had the source material. But you have to start somewhere, and this is one way to take the content that you’re writing and get it out to more people in a more palatable form.

    Any further questions please feel free to get in touch to discuss how I can help.


  • The Marketing Flywheel

    New Year, new marketing plans. Hopefully by now you’ve kicked off various activities and you’re waiting to see how those early campaigns are working out.

    The other thing I see in marketing departments at this time, though, is burnout. Everyone is trying to do everything, either because there’s no real strategy (“let’s throw everything at the wall and see what sticks”) or because of bad planning (“the start date for every campaign is the 1st of January”).

    Either way, you might soon be revisiting the strategy discussion. Specifically, why are we doing activity A? Can we kill activity B? Is activity A working yet? That activity can quickly turn into navel-gazing, when what you need is focus and a way of choosing what you should really be worrying about. To that end, I’ve been using the flywheel model below for years now. The point of the model is that you have a list of metrics and activities you can look at to check whether you should actually be doing them, and whether you’re doing them well. As a simple example: if nobody is coming to your website, what should you do? Should you hire a content writer? A designer for the website? A product marketer? The diagram and notes below give what I think are the “next best” activities, based on splitting the marketing flywheel into five stages.

    The marketing flywheel for senior decision-makers (SDMs)

    This first diagram is for senior decision-makers (SDMs). There are no hard and fast rules here, but generally these are people who are less likely to be actually using the product themselves but certainly influence the buying decision heavily.

    For each part of the flywheel, I’ve put what I think are the most impactful activities. If I only have time to do one thing, what is it? Looking at the first diagram, if you’re brand new to a market (nobody knows about you) and you’re trying to sell to senior people, where should you spend your money? You must create awareness of the brand first; nothing else will work without it. So your first activities have to be things like PR, analyst relations, thought leadership, and some budget for LinkedIn. If you’re spending money on complex lead qualification processes when you have no leads to qualify, then you’re burning money.

    The marketing flywheel for end users

    This second diagram is for end users, meaning you are advertising to the people who are actually going to be using the products. That means they’re likely to be more junior and to have very different requirements (for example, they’re likely to care about usability and less likely to care about long-term financial benefits to the organisation).

    Here, the marketing is different. End users don’t read the same things as senior decision-makers. They’re far more likely to do a Google search for a particular problem they’ve just hit than to read an in-depth analyst report.

    What does this mean for marketing budget? If there is a community of users, then you need to reach out to them. If not, PPC and SEO are crucial. Either way, as you get further through the flywheel, the product has to be amazing (for end users, there’s nothing you can do in marketing that will overcome an unusable product).

    Hopefully this is useful as a way of making sure you’re making a big impact at the start of the year, without burning everybody out and without burning through your whole budget by Valentine’s Day. If your plan is to “do everything”, then that’s not a strategy, that’s a recipe for employee burnout and empty pockets.


  • Five Myths About The Marketing Revenue Engine

    I love the book Rise of the Revenue Marketer. In it Debbie Qaqish describes the need for a change program to move your marketing department from being a cost centre (“We’re not sure what marketing do, but we need them to do the brochures”), to a revenue centre (“They’re responsible for generating a significant proportion of our company’s revenue”). Though the journey is easy to describe, it’s a long and arduous path to take.

    We at Redgate have been on this path for a while now, and we’ve made enormous progress, particularly in the last 12 months. But one of the things that slowed us down was holding on to certain beliefs about how to measure marketing performance and the impact of marketing work – and holding on to those beliefs for too long, when perhaps they just weren’t true. Lots of these ideas came from conferences, blogs and books, and they make a lot of sense on paper. But when you get to the real world of implementing something, the reality is not always as expected.

    Here I’ll go through five “myths” that I found to be unfounded. Of course, these come with big caveats – we’re one specific org, with a specific market, with particular advantages and disadvantages – so all should be taken with a pinch of salt. Still, with that caveat in mind here are my five, starting with the most controversial:

    Myth 1: Attribution Models are Useful

    The idea of a marketing attribution model is that you can take every lead, opportunity or sale and somehow work out “What were all of the things we did in marketing that contributed to that outcome, and what value would we give to each of these things?”. For example, I just generated a lead, I could go back and look through the path history of that individual, find that she clicked on a PPC ad, attended an event, did a Google search, interacted with us on Facebook, and so on. I then have some smart “multi-touch” model that assigns value to each of these (maybe the first or last get higher scores? There are lots of alternatives). If you then know the value of a lead (let’s say, $10), you can work out the Return on Marketing Investment (ROMI) for each activity by comparing the “value” (e.g. maybe $3 for the PPC click), against the spend.
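    Just to make the mechanics concrete (and to be clear, this is the approach I am about to argue against), a toy position-based model might look like the sketch below. The 40/20/40 weighting is one common convention, and the touchpoints and lead value are made up.

    def position_based_attribution(touchpoints, lead_value):
        """Toy U-shaped model: 40% to first touch, 40% to last, the rest split evenly."""
        if len(touchpoints) == 1:
            return {touchpoints[0]: lead_value}
        credit = {t: 0.0 for t in touchpoints}
        credit[touchpoints[0]] += 0.4 * lead_value
        credit[touchpoints[-1]] += 0.4 * lead_value
        middle = touchpoints[1:-1]
        remaining = 0.2 * lead_value
        if middle:
            for t in middle:
                credit[t] += remaining / len(middle)
        else:
            credit[touchpoints[0]] += remaining / 2
            credit[touchpoints[-1]] += remaining / 2
        return credit

    # Made-up path history for one $10 lead.
    path = ["PPC ad", "Event", "Google search", "Facebook"]
    print(position_based_attribution(path, 10.0))
    # {'PPC ad': 4.0, 'Event': 1.0, 'Google search': 1.0, 'Facebook': 4.0}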

    But, I think this is baloney. This is a classic example where – just because you can do the maths, doesn’t mean to say the results are accurate or useful. The model is flawed for at least the following two reasons:

    1. Data. It’s impossible to get all of the data about an individual’s path history – everything they’ve done, interacting with your brand over the last few years. Not difficult, but impossible. You don’t know about the offline activity they’ve done, you don’t know about the browsing they’ve done on their mobile, on their home laptop at the weekend, you’re very unlikely to have a link to their activity from three years ago (when they actually discovered the brand) and so on. NB: some MarTech orgs promise they can deliver on all these things, but I don’t believe them!
    2. Over-simplistic view of how customers learn about a brand. The reality is that an individual will have 100s of different interactions with your brand all of which build up to a given perspective. They’ll attend an event, they’ll speak to a specific person on your stand who may or may not be great, they’ll read 100s of different pages on your site, they’ll talk to their colleagues about you, they’ll read third party review sites, they’ll kick the tyres of the software, they’ll see an ad on a news site (without clicking on it!), they’ll remember a comment from their boss two years ago (“Oh, you should check out Redgate, see what they’ve got”), and so on. All of these things somehow add up to a favourable view of your org (or otherwise!) and to try and model that with a simple sequential attribution model isn’t, I think, valid. The best you can hope to do is make sure every interaction with your brand is awesome and have faith that will lead to positive results.

    Okay, maybe it’s not all baloney – but the approach is, I believe, significantly flawed. Nevertheless, there are some things that can be measured – which brings me to myth 2…

    Myth 2: Everything should be Measured

    Not sure this is controversial actually. To quote Seth Godin:

    The approach here is as simple as it is difficult: If you’re buying direct marketing ads, measure everything. Compute how much it costs you to earn attention, to get a click, to turn that attention into an order. Direct marketing is action marketing, and if you’re not able to measure it, it doesn’t count.

    If you’re buying brand marketing ads, be patient. Refuse to measure. Engage with the culture. Focus, by all means, but mostly, be consistent and patient. If you can’t afford to be consistent and patient, don’t pay for brand marketing ads.

    The danger is that, in an effort to measure everything and show the return on everything, you stop activities because they’re fundamentally un-measurable. The myth is that “Because you need to show a repeatable, predictable and scalable revenue engine, you need to understand and measure the impact of everything you do”. But that’s taking the argument to an extreme view – the reality is that there will always be spend in your budget where you won’t be able to tie revenue to that spend. Ever.

    Myth 3: You Need a Funnel

    Perhaps controversial again. A traditional funnel implies a sequential path for a customer from something like “Awareness of problem” to “Discovered our solution [to that problem]”, then “Evaluated our solution”, and finally “Becomes customer [then perhaps evangelist, etc.]”.

    Again, we’ve never found this to represent reality. Of course all models are exactly that – models. They’re not perfect, but if they’re useful, that’s okay.

    But I feel the funnel fundamentally misrepresents how real people actually interact with a brand. From talking to customers what you find is that there are just an enormous number of holes in this approach. For example:

    • The “Awareness of problem” is just too crude. The chances are that your content was very unlikely to be the way people became aware of the problem; that actually their knowledge has built up in a fragmented way over time; that they’re still learning all through the sale, even post-sale.
    • The idea of “stages” like this just doesn’t make sense generally. Often people are already customers of yours – and they’re finding new things you offer. Their understanding of your offering is forever a slow build-up (from a theoretical “nothing” many years ago to some partial understanding now), and it goes back and forth.

    A funnel implies a single direction of travel, a path to enlightenment, ending with purchasing your tool. But, from talking to customers I find a much messier reality – people go back and forth, there are interrupts and so on. We’ve found it almost impossible to actually classify people in to different stages – it’s too over-simplistic to be useful (we’ve found).

    Myth 4: Conversion Rates Matter

    Again, controversial. But our experience is that conversion rates are the lever you are least able to pull. Why? Because for most orgs, they have a pretty well optimised process for converting leads at different stages. At Redgate, there are certain lead types that convert at a 70% conversion rate, within a 2 week period – and that has been consistent for about 10 years, almost regardless of what we do! We’ve spent a lot of time and effort on this stuff – its value is in “Can we improve/optimise this?” – and generally we find we can’t. Of course you monitor it, to make sure it’s not dropping (e.g. because some leads got lost), but otherwise – stop worrying.

    Finally, myth 5…

    Myth 5: This is an Impossible Task

    I wanted to end on a positive. Two or three years ago, I thought the task of building out a “revenue engine” that was vaguely watertight, believable and actionable was never going to happen. There were so many holes in the data, and it was so hard linking activities to outcomes, that I didn’t believe it could be done.

    I’m pleased to say that isn’t what happened. It’s been pretty arduous, but we are now on the brink of a model that allows us to:

    • See the impact of many (but not all!) of our activities
    • Track the resultant leads through to opportunity then revenue
    • Match the activities with budgets to pull out ROMI
    • Use this insight to stop certain activities (already cut a few things), start a few more, and adjust how we do other things.

    A simple example of the last point – in 2018 we ran a number of webinars of different sorts. We tracked through the leads, opportunities and revenue from each of these and found that the impact of having a “star” presenting the webinar (someone big in our community) had a far bigger upside than expected – at little or no additional cost to us, other than the trouble of finding and convincing these stars. I.e. one webinar with a star involved would generate more high-quality leads than two or three webinars without such a person. So this year, we’re changing our program a little – fewer webinars, but each more impactful, with more big names presenting.

    Just a small example, but there are countless more – we’re building out a model where we know how and which levers we can pull (and which we can’t), and at what cost. It took a long time to get there, but it’s finally becoming real. Feel free to get in touch if you want to know more!


  • Building a MarTech Stack at a Small Organisation

    I recently spoke at the B2B Ignite conference in London on “Building a MarTech Stack at a Small Organisation: A Real World Example of What’s Worked and What Hasn’t”. Here are my slides from that talk.

    Rules of Thumb

    • Manual first, then automate
    • You’re either growing revenue, or saving costs. Should be able to show this benefit on a piece of A5
    • The business case has to be overwhelming
    • However long/expensive you think it will be to implement – double it
    • Step changes, not incremental improvements

    It’s a lot of pictures, so might be hard to understand without the actual talk! Any questions, feel free to get in touch, always happy to help.


  • Measuring Outbound vs. “Always-on” Marketing Performance

    Whenever I meet customers I always slip in a marketing question or two along the lines of “Where did you hear about us? What brought you in to Redgate?”. One of the answers from a couple of months back was:

    Well a year ago, I got a new boss and she told me that I had 6 months to turn our dev team in to a “DevOps” team. I did most of the work then realised the database was causing us real problems. So I did a Google search, found your website, and what you said made complete sense to me – you knew what you were talking about, so I tried your software out.

    Okay, great but – how on Earth do you measure the effectiveness of your marketing activity with answers like this? What’s the ROI of this lead? I know the return (the customer bought the products in the end), but the investment? How can you calculate that?

    I’ve written endless articles over the years about marketing attribution, “performance management”, lead measurement and so on. All with the stated aim of showing “What’s working?” or “What’s the ROI of my marketing budgets?”. All different ways of asking the same thing.

    And yet, years later, after reading many others’ articles, attending conferences, seeing demos from various products claiming to give the “formula” (marketing automation platforms, Google etc), the answers don’t feel any closer. Why is it so difficult?

    Firstly, every business is different with, hopefully, different marketing strategies and tactics – some activities are inherently more measurable than others. A B2C business selling gizmos at $10 through Facebook ads is fundamentally different to a B2B org trying to sell $3m deals to the Fortune 500. Measuring the performance of those Facebook ads where the customer buying cycle is likely very short (“I saw your ad, I clicked Buy Now 10 minutes later!”) is a significantly easier task than measuring the impact of four years of concerted marketing plays to win over a multi-national bank. But still, all feel like hard problems.

    It’s become a cliché in marketing blogs to quote the line “I know half of my advertising works, I just don’t know which half”; but I include it because I think it can be updated to something like “I know what I do to get half my leads, I just don’t know about the other half”. I feel this better reflects the state of play in marketing performance management, and hopefully I can explain why.

    At Redgate we fundamentally get two sorts of leads – what I refer to as “Always On” leads, and “Outbound” leads. I think, for the first type, measuring marketing ROI is inherently difficult – I’d argue impossible. For the second type, you can measure ROI and should do so at all times.

    Tackling the latter first – these are leads where you can make a strong argument that, if it wasn’t for a given activity, you wouldn’t have got the lead. An obvious example is an event or a graphical ad in the trade press. There’s a pretty solid chance that if you hadn’t been at that event, or placed that ad, you’d have missed that person. The trigger that caused that person to make an enquiry was paid for, by you. Of course, there are arguments that they might not have really been interested if they hadn’t recognised your brand from years of prior work; or that “Maybe they would have come to your site anyway!”. But that over-complicates what is already a difficult problem. If you spent $10k going to an event, and you made $20k, you should attribute that $20k to that $10k spend – simple.

    This is the half you can understand and measure. For media ads, events, webinars, other placement spends, I think you can put together a pretty good spreadsheet or other tool showing the ROI on your investments – you just need to do the graft (or use a tool – I’ve been most impressed by Pardot recently, which seems to do this sort of thing very well).
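
    To be concrete about what “doing the graft” looks like, here’s a minimal sketch of that spreadsheet-style calculation in code. The channel names, spend and attributed revenue are all hypothetical, and this isn’t how Pardot itself reports things – it’s just the underlying arithmetic:

    ```python
    # Per-channel ROI for "Outbound" activities, where a specific spend can be
    # credited with triggering the lead. All figures are hypothetical.

    channels = [
        # (channel, spend, revenue from leads that activity triggered)
        ("Trade-press display ads", 10_000, 20_000),
        ("Industry event booth",    25_000, 60_000),
        ("Sponsored webinar",        8_000,  6_500),
    ]

    for name, spend, revenue in channels:
        roi = (revenue - spend) / spend
        print(f"{name:<26} spend ${spend:>7,}  revenue ${revenue:>7,}  ROI {roi:+.0%}")
    ```

    A negative line like the webinar above is exactly the sort of thing the regular review should flag for cutting or reworking.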

    But what about the first half? What triggered the customer coming in to find out about your offering? It was his boss telling him to find a solution. It was nothing to do with your marketing. Like a lion waiting for her prey to walk past, your job is to be “Always on”, ready with the right material, the thought leadership, the clear CTA, the understanding of the customer. Most of these customers do their own research well before they talk to your sales people – they read your website, they read other people’s websites, they talk to their analysts, they talk to their colleagues. You may have made investments in all these areas, both in terms of $$ spend and employees’ time (web copy, site architecture, positioning and messaging, briefings with analysts, content placement, blog posts, influencer programs), but really, how can you apportion this effort to the lead?

    This is a hiding to nothing. Trying to work out how the salary of the individual who spent half of her time talking to Gartner, Forrester and IDC can be attributed to the leads that came in this year is both an impossible and pointless task.

    The reason this is called “Always on” lead generation, is because, like the lion waiting for prey, you have to always be there when the customer goes looking. If you’ve got the money, you have a pack of lions covering every location – every analyst, your website, 10 other websites, recommendation sites, articles in the trade press and so on. But when that customer does a Google search, you’re ready and waiting with the greatest copy and thought leadership you can muster.

    [A quick aside about SEO and PPC – this is table stakes. If a customer out there has a need for your solution and he/she doesn’t find you very, very quickly through Google, then you have a more fundamental problem that needs fixing immediately. Customers not finding you easily when they’re already out there looking is a fixable problem, primarily through good SEO practice.]

    The important subtlety here, and why this is such an issue, is the driver of interest for customers. For the outbound activity, it’s legitimate to say that your actions have precipitated that activity – have, in some way, driven that behaviour. Theoretically if you do more of these activities, spend more money, you’ll get more leads – if I spent $10k this month on webinars, then it’s possible that spending $20k might double the number of leads (ignoring issues like diminishing returns).

    But for the “Always On” leads, you haven’t driven this behaviour. If you doubled your spend on, say, “hiring even better copywriters for our blogs”, would that double leads from this source? No – because the primary drivers are out of your control; they’re in the hands of the businesses you serve. Maybe you can improve conversion somewhat, but I’d suggest the attribution is very tricky.

    So overall, I do think it’s possible, and essential, to show where half of your leads come from – the ones where you’re precipitating the activity yourself. This information should be readily available in Excel, PowerBI, Pardot, whatever, and you should be reviewing it constantly – was the spend right? What can we double down on? What should we cut?

    But for the other half – stop worrying. Abstract calculations based on employee time or similar are pointless. You’ll never really understand the complex interaction of customers and touch-points that led to that lead – what value does it bring, knowing that the customer read 129 different parts of your website before getting in touch? – so stop worrying, and focus on measuring what you can measure.


  • There are Three Types of Marketing – Inbound, Outbound and… Plain Rude

    Reading one of the many content marketing pieces from HubSpot, I noticed the following in a basic piece on What is Digital Marketing?, after paragraphs about the virtues of Inbound marketing techniques:

    Digital outbound tactics aim to put a marketing message directly in front of as many people as possible in the online space -- regardless of whether it’s relevant or welcomed. For example, the garish banner ads you see at the top of many websites try to push a product or promotion onto people who aren’t necessarily ready to receive it.

    Now HubSpot obviously have an agenda here – their whole business model rests on the validity of the Inbound marketing approach over Outbound approaches (such as “garish” banner ads), and so they’ve overstated the inefficiency of ads. But are ads really garish and intrusive? Are they really “push” advertising (rather than the “pull” of good content)? What’s the problem here?

    Since becoming CMO of Redgate and, perhaps foolishly, updating my LinkedIn profile to reflect this, I’ve started receiving endless emails from agencies, recruiters, marketing data organisations and so on. And many of these are what I would call, if not rude, certainly intrusive and over-familiar. These techniques have been written about elsewhere – this week alone I’ve had:

    • Use of “RE: Our conversation” in the subject line (really, I don’t remember this!?)
    • Taking names from my LinkedIn network and saying “Your colleague <Insert Name Here> said I should speak to you…” – when I know that’s not true
    • Assumptive closes (“Shall I book 20 minutes in for a chat on Wednesday?”)
    • Stalking (early messages which seem innocent enough, chatting about marketing issues, but which soon turn into sales patter)

    …and so on.

    I find all this pretty intrusive. But isn’t it just the same thing as “garish” banner ads, intruding on my field of vision, when I’m trying to get something done on the Internet? Interrupting my work when it should be me in charge of my flow (as per the Inbound model)?

    I think this is to overstate the intrusion from banner ads. Firstly, yes, there are very interruptive ads which fill the screen, and you have to either play “hunt the X” to try and close them, or wait 15s before you can move on. These are pretty annoying. But most graphical ads aren’t like that – they’re well-branded rectangles, which are as ignorable as you like. As a marketer I hope you’ve picked up on the branding, noticed a message, that the ad has lodged somewhere in your subconscious, so that next time you’re looking for a solution you think, “Oh yeah, who were those Redgate guys?”. But of course, you might just ignore them (and I’d be very surprised if you clicked on them – we all know the stats on banner ad click-through rates), and that’s fine.

    I don’t feel this is nearly as intrusive as aggressive cold-calling and emailing – these are marketing techniques too, but they exhibit the worst traits of “push” marketing: interruptive, run to your timetable rather than mine and, quite frankly, not leaving me with a particularly positive impression of your company. A well-designed ad – perhaps with some humour, certainly beautifully executed – isn’t in the same category.

    As I say, HubSpot have an obvious agenda – to push the Inbound model and disparage outbound techniques, but the latter shouldn’t all be tarred with the same brush. Ads are as popular as ever on the web, and as more options for personalisation and targeting become available to graphical media – combined with the deluge of mediocre content – I feel this un-intrusive channel will have a resurgence.

    But you’ll never find me pushing the dishonest cold-call/email (“I spoke to your colleague yesterday about how we could help you..” – no, you didn’t!). That’s truly interruptive marketing, which does nothing but damage to your brand.


  • Write Content That People Actually Want to Read

    This feels like a pointless blog post – the thing I’m going to say seems so obvious that I shouldn’t need to say it. Still, I see examples where this doesn’t happen, so perhaps it’s worth reiterating the point.

    Here’s the incredible insight – if you want people to read content that you write, then it has to be genuinely interesting or useful for them. Erm, that’s it.

    Standard marketing practice is to find customers, acquire them and retain them – not complicated (extremely difficult! But not complicated). Inbound marketing turns the first of these on its head – you help customers to find you; then you acquire and retain them. And the main point I want to make here is that for any of this to work, the content you create has to be something people actually want to read, comment on and share. I don’t know why this point needs to be made but, as I say, I see examples all the time of content created which ticks the boxes for the marketing department (it’s about our products, tick!), but which no-one in their right mind would ever actually be interested in.

    There are 1000s of books, articles and blogs on inbound marketing – about being non-interruptive, matching the customers’ journey (rather than forcing them down an artificial journey of your own making), the importance of SEO, of remarkable content and so on. For a great primer, I’d strongly recommend HubSpot’s book Inbound Marketing: Attract, Engage, and Delight Customers Online (and more on them later).

    But the model is simplicity itself – whether you’re B2B, B2C, appealing to Gen X, Y or Z – customers today don’t like being interrupted. As a marketer you need to gain their permission to interact with them. How do you do that? By creating things (blog posts, videos, webinars etc) that they genuinely find interesting. If people find it interesting, they’ll share it on social media, comment on it, link to it from their own sites. This leads to Google rating it – and your domain – highly in search results for a given topic. People then find it (for free, not via Adwords), and because the content is strongly associated with your brand, customers find out about you, see you as a thought leader, trust you and give you permission to tell them about your products.

    As I say, simple (just not easy). But at the heart of the model is the need for content customers genuinely want to read or view. Otherwise the whole model breaks down – the content isn’t actually read, it isn’t shared, no-one comments on it, and all that work just gets lost amongst the billions of other web pages out there. The worst culprits are thinly-disguised adverts for products. If, for example, a product’s unique proposition is that it works in a certain way, then endless articles about how you, the customer, really have to work that way may seem clever (“It’s not about our product really, it’s advice on how we think you should work! Honest!”), but customers can generally see right through it. It’s hard to give concrete examples without naming and shaming companies, but just this week I read an article from a company arguing that its very specific methodology for implementing software development practices was obviously the one and only way of working. So blatant!

    And yes, this gets a tick for being relevant to your company, and the marketing team are happy because they’ve done their job of producing some content about their product, which isn’t just a glossy advert.

    But who would share this article? With their friends and colleagues? “Look Twitter followers, can I share with you a blatant advert for someone’s product?”. I’ve almost never seen this happen in the real world. Why not? Well I don’t know about you but I find it slightly embarrassing sharing what is, clearly, just company marketing with people I know. Of course as marketing folk we’d love this to happen, but it’s not realistic.

    What people share is content that’s genuinely useful or interesting. NB: It has to be in your domain – I could post an article about Radiohead’s new album coming out this summer. Might get a few readers, but they’re not the same people who are likely to want the software produced by my company.

    The masters of this, IMHO, are HubSpot. Look at their marketing blog – http://blog.hubspot.com/marketing. As the headline states – “Where Marketers Go To Grow”. And the articles are exactly that – if you work in marketing, they repeatedly write content which I find genuinely interesting and useful. Things that help me do my job better. And I share them with colleagues at work, because I think they might find them useful too. Sometimes I share them on Twitter or LinkedIn. Why on Earth would I do that? It’s certainly not because of some allegiance to them or because I want to promote them – why would I do that? Weird. I do it despite the fact that it comes from a corporation, because of the quality of insight. And obviously, because all their articles are about sales and marketing – and sales/marketing software is what they produce – the impression that HubSpot “know what they’re talking about” only grows in my mind. And when we came to look at marketing automation software, HubSpot was right up there at the top of our list.

    And this is what you have to produce too.  Stop thinking about flogging your product and start thinking about what people want to read in your domain. If you sell bathroom tiles, what about some articles showing off inspirational ideas for your bathroom from top designers? You can imagine someone sharing this with their other half, “What about this as an idea for us?”. If you’re a taxi service, articles about getting around big cities for new visitors? Shared as “I found this about transport in Detroit, let’s use it for our trip next week”. If you make machine-learning software for banks, how about some simple how-to articles on current methodologies, that a potential customer can share with her boss to explain all the clever stuff she’s working on?

    I don’t think it’s complicated – something in your domain that people actually want to read. Sure, you then need to make sure this leads to opportunities for your business, and that takes a little faith and measurement. But it’s all based on a foundation of great content – without that you’re just doing traditional advertising in a different format.


  • Dissecting Thought-Leadership

    [Image: Andy Warhol, [no title], 1972]

    To start, I don’t really like the term “Thought-Leadership”. Like many things in marketing, it’s a bit too “marketing-ey”. It also has echoes of NLP, something I’m not a big fan of, to say the very least.

    But, I guess it’s pretty descriptive for what it means – I’d define it as something along the lines of:

    Providing insight, ideas and leadership in a given subject area, that stretches the limits of the current consensus, driving a subject in new directions and providing deeper understanding.
     

    The reason I’ve highlighted “leadership” and somewhat repeated this idea later in the sentence, is that I believe it’s important to distinguish between this sort of activity and certain types of content marketing which are merely reflecting the current consensus and knowledge in a given area. It also highlights why I think thought-leadership is so hard, particularly if you’re using this as a marketing technique.

    First, to distinguish between thought-leadership and merely “reflecting the consensus” – I think the distinction rests on whether you are genuinely providing new insight and a deeper understanding of a subject, or just reiterating others’ points of view and ideas. There’s nothing inherently wrong with the latter (unless you’re plagiarising, of course) – a lot of great content marketing is based on this approach. The example I gave a while ago, of VW providing content on how to keep your car safe in the winter, is a really solid bit of content marketing. What they’re talking about (getting tyres, brakes etc. checked before the winter starts, checking tread, knowing your revised stopping distances and so on) is hardly pushing the forefront of engineering knowledge. But does that matter? It’s still solid content marketing, which will enhance VW’s reputation and draw people to their site.

    But it’s not thought-leadership. I’m not even sure what thought-leadership in the area of winter car safety would look like! And I’ve struggled to find really good examples outside of the area I work in. I think this is because, although there are many personalities (particularly in marketing, where the Cult of Personality is rife!), very few of them provide ideas that make you think “Wow, I would never have thought of that! I now fundamentally see this subject in a new and different way”. Rare, I think. As I say, there are a few I know in my own subject area (database development), but outside that?

    The two good examples I could find in marketing generally are:

    1. Steven Woods at Eloqua (now part of Oracle). Steven has written a couple of great books on marketing automation – Digital Body Language and Revenue Engine. I see these as great thought leadership because he wasn’t just repeating received wisdom on a given subject, but really trying to say something new, and to give a more in-depth point of view on the subject.
    2. Google Analytics blog. The thing I like about the GA blog is that it’s a mix of content but, more importantly, that they do genuinely try to say something new and insightful with many of the posts.

    But I think there are a couple of other things these examples have in common, which make them good examples of Thought Leadership – things that are not easy to replicate in a convincing way:

    1. Authority – both are respected sources of information, so you listen. If it was exactly the same content from a.n.other random individual, I think it would be more difficult to get value from the content.
    2. Relevance to business – again, both talk about topics which promote their businesses. As I mention, I think it would be relatively easy to find someone authoritative to talk on a topic of his/her choosing, but is that going to promote what you sell?

    And this is why I think effective Thought Leadership is actually very difficult indeed. You need to find someone who fulfils the following three requirements:

    1. Knowledgeable/insightful and able to push the topic forward,
    2. Is a respected influencer in the community,
    3. Is willing to talk on the subject that promotes your business.

    You can throw money at the problem of course, by hiring some big names. But even then, if you hired someone very expensive and authoritative in a specific area, but that person wasn’t already very interested in the subject matter of your business (criteria 3), you’ll still struggle to get good thought-leadership from that person.

    An alternative is to grow someone from within. Maybe your CEO would be willing to tour the world talking about a given topic, writing blog articles on the side to support this. Or maybe you’ve got some very smart internal people who are already authorities on a subject, but you didn’t know it. All are options, but as I say, if you fulfil the three criteria above, you’ve a lot more chance of having a real impact, rather than just pushing out content that few people read.


  • Is it the Content or the Author that Matters for a Blog?

    [Image: St Jerome]

    If, like me, you subscribe to a lot of marketing RSS feeds, then you can’t have failed to notice the almost overwhelming proportion of posts about content marketing/SEO and how choosing appropriate and useful content for, say, your blog is a killer way of drawing in early stage leads. This SEOMoz post is a good guide to improving your blog reach, for example.

    Hard to disagree with that. However, we’ve been working on a blog at our company recently (The Future of Deployment, if anyone is interested!) and, looking at the traffic on that site over time, it struck me that there were actually three primary things affecting the impact/success of any given post (NB: the blog is at an early stage, so we’re talking about the generation of new traffic here, rather than reads from existing subscribers). In approximate order of importance (IMHO):

    1. Who the author is,
    2. Whether people link to it (because it’s interesting),
    3. The SEO credentials of the content.

    To explain these a little further – the first is reasonably obvious. We’re lucky at Red Gate to have prominent, well known people who can write for us. When they blog, they promote their posts, everyone thinks “Hey, Joe Bloggs has written something new, I’ll check that out” and we get good traffic.

    The second source of traffic is, as it says, about whether the community out there, or others (for example our sales people), find the content useful enough to either link to the post, or send links to the post out to the public.

    And third is the rather narrow task of using a tool like Yoast’s SEO tool for WordPress (which I love BTW) to get the SEO right on your blog post.

    Of course, if you can, you work on all of these – you get great people writing for you, you reach out to the community to get the word out, and you work on your SEO representation. But, as ever, we only have limited time, and my conclusion, from looking at what works, is that it’s the first item which has seemed to work best for us. So, if you want traffic, it’s not so much what you write that matters as who writes it. The next logical conclusion is that, if you want to really make a success of a blog, time directed at making friends in your community, particularly with people who are already very well known, may be considerably more valuable than time spent trying to get your content just-so.
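
    For what it’s worth, the “looking at what works” step doesn’t need anything sophisticated. Here’s a minimal sketch, assuming you can export per-post analytics to a CSV – the file name and column names (author, inbound links, an SEO score, new visitors) are hypothetical:

    ```python
    # Rough comparison of the three factors against new-visitor traffic.
    # Assumes a hypothetical export "blog_posts.csv" with columns:
    # post, author, inbound_links, seo_score, new_visitors
    import pandas as pd

    posts = pd.read_csv("blog_posts.csv")

    # Factor 1: does the author drive traffic? Median new visitors per author.
    print(posts.groupby("author")["new_visitors"].median().sort_values(ascending=False))

    # Factors 2 and 3: simple correlations of links and SEO score with traffic.
    print(posts[["inbound_links", "seo_score", "new_visitors"]].corr()["new_visitors"])
    ```

    Crude, but it’s enough to see whether the by-author numbers dwarf the other two factors.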

    Of course, what you’re doing by courting these well-known authors is buying in their reputation, hoping some of it will rub off on your blog and then your company. But this seems reasonable – there’s an awful lot of great content out there (a good post here, from HubSpot, about how “Everyone’s doing it, so you’d better be good”) and actually a lot of it is reasonably obvious “no-brainer” stuff (e.g. I must have read 100 times, “Make your content interesting if you want people to read it” – thanks for that). So how can you differentiate? If someone you respect, who you trust, and who you know is an expert in his/her field is telling you something, you’re far more likely to listen, and to hear what’s being said above the noise.

    So maybe your time is better spent taking some of the top dogs in your industry out for expensive lunches, rather than hunching over a laptop trying to craft the perfect erudite post on a given topic. Ah, the sacrifices we have to make as marketers…