The fine line between sharing and self-promotion

There is no doubt that digital technology has greatly enhanced our ability to share and connect with others. Whether it be email or social platforms such as Facebook, Twitter and LinkedIn, we are more connected than ever before. As the ease of connection has grown, we have expanded our networks beyond the traditional inner circle of friends and family to include many ‘weak ties’: people we’ve met at networking events, people who found our profile online, people whose ‘friend’ requests we felt obliged to accept lest we hurt their feelings…people we would struggle to recognise on the street.*

*Professor Robin Dunbar famously determined that we can only maintain 150 meaningful relationships at any one time. This was termed ‘Dunbar’s number’ and has been shown to apply online in much the same way as it does in real life.

Sharing with an audience of people we don’t know well is changing how we communicate. For some, it means sharing less on public platforms, unsure of who is listening and what people might think. For others, it means carefully curating what they post online to highlight the best parts of their life and work. And for a few, it is a genuine and meaningful opportunity to expand reach and impact.

But the real risk that lies within these expanded networks is that we stop caring as much. Rather than considering them friends or acquaintances, we start to think of them as an audience (either a personal or a professional one). We can still pinpoint close friends and relatives within that network, but when we consider them as a collective, the number of weak ties often outweighs the number of people we care deeply about…and we don’t have the capacity to care about them all.*

*The definition of care is ‘the provision of what is necessary’ and I don’t believe we can show true care for others without taking the time to understand their personal interests and needs.

And so, just like an actor treats their audience differently from their loved ones, we start doing the same. We play a part for our audience that is different from what we show in private. We seek approval…and we self-promote.

The line between sharing and self-promotion is a fine one. From the outside they appear much the same, but the intent is very different. Sharing is done from a position of generosity, to help the people we care about. Self-promotion is what we do to make people like us and remember us…and to confuse matters further, sharing will generally result in some element of self-promotion, and self-promotion always requires some form of sharing.*

*Case in point is this post. As much as possible, I’ve tried to write this from a position of generosity, to articulate a problem I see many of my peers dealing with and to help them find a way past it. But if we assume for a moment that it achieves its objective, then there is also little doubt this post will serve to promote me.

This fuzziness between sharing and self-promotion is not just theoretical; it’s a problem I’ve been struggling with over the last few months.

About a year or so ago I started working with Mykel and Dave Dixon (aka The Dixon Effect) to produce a short video that articulates the motivation behind the work I do. It was based on an awesome video they had done for a good friend of mine, Dr Jason Fox, one that beautifully captures his wonderful complexity and thoughtfulness.

I acknowledge that my willingness to fund the project was not altruistic; it was conceived for promotional purposes…but along the way the intent changed. The original script was rewritten, Mykel composed new music and Dave reshot some of the video because I felt so uncomfortable with the self-promoting elements of the first cut…so uncomfortable that I knew I wouldn’t be happy sharing the video once it was finished.*

*The final product is more a call to action about the choices we make with technology than it is about me. I wanted people to see that making smart choices (or any choice at all) about how we use our digital tools can improve balance and quality of life. 

I received the revised video a month or two ago but have continued to struggle with how and when it is OK to share it.

This dilemma has meant that apart from one little airing on Facebook the video has spent most of its life sitting dormant on my hard drive.

So where does that leave us?

The fuzziness between sharing and self promotion means that only we can determine whether what we post online is done from a position of generosity or selfishness. The fuzziness also means that we will always be able to pretend to others (and ourselves) that one was really the other, but if we continue to operate from a position of selfishness we will ultimately devalue our networks, including the people in them that we genuinely care about.

So with that in mind, I’m sharing my video with you now in this post. I’m sharing it because I think it is a good example of the fuzziness that we are all grappling with when it comes to social media. I’m sharing it because regardless of the self promotion, I believe the message is an important one…

…and I’m sharing it because if you like the video and you find it valuable, well maybe you will like me just a little bit more as well.

This blog post has been syndicated to Medium. If you’d like to add comments or ideas, head over to this page.

Your technology doesn’t care about you

There has been much written over the last few years about the threat that artificial intelligence and other emerging technologies pose to existing employment. There is no doubt that there have been incredible achievements in these areas, with machines consistently outperforming people who would normally be considered the ‘smartest in the room’.

There is no doubt that smart machines will become increasingly prevalent in our lives, but they have a significant shortcoming that is unlikely to be addressed any time soon.

At the end of the day, the machines don’t care. They don’t actually give a shit about you or the impact their decisions have.

…and to illustrate this point we need to discuss the biscuits I baked on the weekend.


Perhaps the best known of these smart machines is IBM’s Watson which, in 2011, beat not just two of the smartest people in the room but two of the smartest people in the United States. In a special edition of the game show Jeopardy, Watson was pitted against the game’s best-ever players, Ken Jennings and Brad Rutter (both previously undefeated champions), and beat them…twice.

Since graduating from quiz shows, IBM’s Watson technology has been applied in a whole bunch of different ways, including helping doctors diagnose cancer, facilitating tax returns and providing advice on where to get the best Chinese food. But of all the applications, perhaps the most intriguing has been Chef Watson. Chef Watson is what you get when you combine machine learning with a large database of recipes. By parsing some 30,000 recipes, Watson learned which ingredients work well together. The free Chef Watson app then combines this with Bon Appétit magazine’s recipe database to generate new and intriguing recipe combinations.
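IBM hasn’t published the internals of Chef Watson, but the core idea of learning which ingredients ‘work well together’ from a pile of recipes can be sketched with simple co-occurrence counting. The recipes and the ‘synergy’ formula below are entirely made up for illustration; this is a toy sketch of the idea, not Watson’s actual model.

```python
from itertools import combinations
from collections import Counter

# A tiny made-up corpus standing in for Watson's ~30,000 parsed recipes.
recipes = [
    {"flour", "butter", "sugar", "egg"},
    {"flour", "butter", "sugar", "pecan"},
    {"flour", "butter", "cream cheese", "sugar"},
    {"red onion", "butter", "cream cheese"},
    {"pecan", "raisin", "sugar", "flour"},
]

# Count how often each ingredient appears, and how often each pair co-occurs.
single = Counter(i for r in recipes for i in r)
pair = Counter(frozenset(p) for r in recipes for p in combinations(sorted(r), 2))

def synergy(a, b):
    """Share of recipes containing either ingredient that contain both."""
    both = pair[frozenset((a, b))]
    either = single[a] + single[b] - both
    return both / either if either else 0.0

print(synergy("flour", "butter"))      # pairs that often co-occur score high
print(synergy("red onion", "raisin"))  # never seen together -> 0.0
```

A real system would need a far bigger corpus and a smarter score, but even this toy version shows why Watson can confidently pair ingredients without ever caring whether the result tastes good.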

Last weekend I was speaking at the Mindshop conference in Sydney and thought it might be nice to bake some biscuits for the delegates.* So after browsing Chef Watson and disregarding the biscuit recipes that included pulled pork and other types of meat, I settled on cream cheese, red onion, pecan and raisin cookies. According to Watson these ingredients have a 98% synergy.

*OK, anyone who knows me knows this is a lie. I don’t bake, I only cook. I think this comes down to two things: firstly, with cooking you don’t really need to follow instructions (and I don’t like following instructions), and secondly, I will almost always order the cheese platter over the dessert. So in actual fact my wife, Naomi, baked the biscuits.

After handing out the biscuits at the conference, the feedback I got was close to unanimous.

“Mehhhhh”

They weren’t terrible but they clearly weren’t great. And the truth is, Watson doesn’t even care. On the other hand, if I’d called my Mum and asked her for a recipe, she would have taken the time to find out the type of biscuits I wanted, gone through some options, helped me narrow it down…and then probably baked them for me.

Now the theory is that if I gave my feedback on the recipe back to Watson, it could use that to fine-tune the algorithm and serve me up a better recipe next time. On the Chef Watson website IBM suggests that “Chef Watson really needs you to use your own creativity and judgment”…but all this is really doing is outsourcing the care to users. At the end of the day, if Watson doesn’t care about us, why should we care about it?

So what does this all mean?

Most jobs involve other people and as a result require more than just facts, answers and judgement. They require a full range of emotions, including compassion, empathy and care – all of which are required in any relationship worth having. Even call centre work (generally regarded as one of the most likely short-term victims of AI and machine learning) is not immune. A study by Duke University suggests that customer satisfaction in a call centre environment is overwhelmingly influenced by how customers were made to feel (81%) rather than by the information that was presented (19%).

It is through human connection that we may gather information that in turn feeds our AI systems, and it is through human connection that we will also get people to understand and accept (or debate and question) the advice that is generated.

And what will keep human employment safe for the foreseeable future (though potentially in a different form) is that, unlike technology, care and human connection are far more difficult to scale.

Following the match, Ken Jennings wrote a piece for Slate magazine which included this:

“Just as factory jobs were eliminated in the 20th century by new assembly-line robots, Brad and I were the first knowledge-industry workers put out of work by the new generation of ‘thinking’ machines. ‘Quiz show contestant’ may be the first job made redundant by Watson, but I’m sure it won’t be the last.”

I have no doubt that Ken is right, but his observation also highlights both the strength and weakness of smart machines such as Watson. They excel at jobs where success is defined by facts rather than feelings. In a quiz show, the only thing that matters is whether you are right or wrong. But in most jobs, being right or wrong is often just one small part of the equation.


Where’s your humanity?

WARNING! LONG POST

[Insert cup of tea here]

In a month or so I’m delivering a keynote entitled ‘Will technology make us more human?’ It’s a keynote I’ve had in my speaker guide for over a year, but until now no one has actually booked me to deliver it. I’m not sure why that is. It feels like a discussion that many organisations need to start having. There is a very real risk that, without clarity on what we want from our technology, we will ultimately accept anything we are given.

When you delve into the news reports and research about our emerging but unknown future, a future where we face being outsmarted by our technology, you piece together a story that goes something like this. Sometime in the next 15 years you have at least a one-in-three chance of losing your job to a robot or AI. This will be a challenging time; you might try to re-skill into something more current like coding (it’s the new blue-collar work), but as technology keeps getting better it will be hard to stay ahead of AI. At some point 20 to 30 years from now it will be deemed that the singularity has arrived, meaning that artificial intelligence has surpassed human intelligence, at which point we will either need to merge with AI if we want to remain relevant or face becoming technology’s ‘pet’.*

*On the flip side of this doom and gloom is the argument that many of the jobs that face being automated weren’t that great anyway. And I’m not just talking about monotonous factory work; the good news is that many lawyers and accountants face automation as well.

But something important is missing from this view of the future, and that is…why? What’s the point of all this technology driven productivity? What is it that we want out of life? And before we decide to merge with AI or upload our consciousness to a hard drive, what will we potentially lose or leave behind?

At the core of all this is a question that’s been bouncing around in my head for some time now and that is ‘What does it mean to be human?’ As technology continues to encroach on the activities that we once considered the domain of people, it is reasonable for us to question what it is that makes us special.

Now bear with me. From a philosophical perspective we often use the word ‘human’ in a contextual way. From an evolutionary biology perspective it might mean ‘not an ape’, but from an interpersonal perspective it might mean ‘fallible’ (as in ‘we’re only human’). Ultimately, being ‘human’ is being similar to how we see ourselves. Which leads us to an important point: technology will never be human (no matter how good it gets), because conceding that it was would undermine our own sense of identity. Kiwis hate being considered the same as Australians and Canadians hate being confused with Americans…but everyone would feel a little bit hurt if, during a phone call, someone thought they sounded like an automated answering service.

So, what is human is ultimately defined by what our technology is not.*

*This is compounded by the fact that once we create a technology to do something, the value of that thing falls. This is a basic supply and demand equation: technology makes things more abundant, and ultimately the value falls. When we didn’t have mechanical tools, physical strength was valued. When we didn’t have calculators, mental arithmetic was valued. And while AI remains in its infancy, we will continue to value certain types of knowledge and expertise, such as what you learn in eight years of medical school.

In this sense, the definition of humanity continues to evolve. In our not too distant past, physical prowess played a far more significant role in defining our humanity. The Alpha Male is a throwback to when the ability to lift heavy things and swing them around your head (like, say, a sword) had a significant impact on both our personal success and our value to others. But with the advent of steam power and the flourishing of mechanical technologies, physical strength meant less and less.

In fact, with the first industrial revolution came a revolution in humanity. We came to value people for their brains more than their bodies. Bodies couldn’t compete against the technology of the times and as a result brains became the new competitive advantage.

In his book Unnatural Selection: Why The Geeks Will Inherit The Earth author Mark Roeder argues that many traits that were previously considered detrimental to human survival such as Asperger’s syndrome, ADHD or being on the autism spectrum have now become an advantage. This is not to say that physical appearance no longer matters, but rather that ‘the book’ is not ‘the cover’.

But this is not the end of evolution in either technology or our definition of humanity. The rapidly emerging field of AI is casting a shadow across what were once greatly valued mental feats. We can no longer compete against computers in Chess, Go* or Texas Hold ‘em. Computers are helping diagnose cancer, completing our tax returns and even recommending where we can get the best Chinese food.** So if the geeks can’t outsmart our technology, who gets to inherit the earth?

*Interestingly, during one of the games between the world champion of Go and Google’s Go-playing AI, AlphaGo, European champion Fan Hui responded to one of the moves with: “It’s not a human move. I’ve never seen a human play this move.”
**In fact that’s all being done with just one AI called Watson. Just don’t ask Watson what’s for dinner, his food suggestions have been generally less than appetising.

Notwithstanding the potential risks to the very survival of the human race that unfettered AI brings, it is perhaps time to once again redefine ourselves and embrace the next chapter in human evolution. Just as in the past, the things we will value going forward, the things we will choose to associate ourselves with, are the things that our technology can’t do for us. This will include traits such as empathy, love, ingenuity, ethics and perhaps even romance.

Which is a lovely segue to the Business Romantics.

Perhaps the highlight of my last two weeks has been The Business Romantics tour I went to last Friday in Melbourne. The tour was hosted by Mel Grablo of Talking Sticks and Mykel Dixon and featured the amazing Tim Leberecht. What was truly inspirational about this event was not just the content (which could just as easily have been downloaded via YouTube or read on a Kindle at greater convenience) but Mel and Mykel’s commitment to creating an event that rejected established norms (read: logic) and catered to an emerging humanity.*

*For someone who speaks at a lot of business conferences, it was the first time I’d seen a three-piece band to accompany the speakers, a host with a grand piano, a resident artist, an unscripted half-hour slot for audience contribution…and a whole lot of wasted catering when this overtook the afternoon tea break.

In his keynote Tim made one particular point that stuck with me. The Romantic period of art and literature was a direct response to the obsession with empirical evidence and the scientific method that emerged during the industrial revolution. We are now in the midst of a new industrial revolution (the fourth, apparently) and echoes of the same overt focus on productivity, logic and data can now be seen throughout society (and most strongly in business).

But just as data and logic failed to complete our understanding of humanity 300 years ago, I believe they will fail again now. This is not to say that there isn’t value in scientific pursuits, but rather that parallel to these pursuits we need something else, something more, something that is difficult to automate and therefore retains its inherent value.

Our value has always been in our humanity, even if our understanding of what this means has changed over time. I believe we all need to start exploring what we want humanity to mean next. Failure to do so leaves us open to both replacement and control by AI and other emerging technology. In which case, we better hope our future AI keepers like having pets.


Should we be doing a thirty-hour week?

I grew up in Cervantes, a small fishing town in Western Australia where my dad was lobster fishing. One thing that my dad would always do on his boats (especially if someone else was going to be driving it) was limit the revs (or speed) that the engine could run at. If you rev an engine higher for longer you not only use more fuel, you also increase the rate of internal wear and the risk of long term damage. By artificially restricting the revs to an optimal level, the engine would operate more efficiently in the short term and be more reliable in the long term.

I think one of the big challenges we face with technology is that it’s allowing people to rev both faster and longer. Not only are we trying to get more done in each and every moment, we are also taking our work home with us and continuing it after hours and on the weekend. In the short term we might feel that we’re getting more done but we are experiencing diminishing returns on the time we invest, and over the long term there is potential for some serious damage to be done.

A recent study showed that workers in small and mid-sized Australian businesses were doing an average of 10.7 hours per week outside of normal business hours. As a business owner, these free hours might sound awesome, but the truth is many of them are not that productive. In fact, of the average eight-hour day, only three hours are generally spent doing meaningful work.

Over the long term, our inability to disconnect is also impacting the quality of our relationships, and in turn our happiness, health and wellbeing. This then has a negative impact on our work. Those who work 55 hours a week rather than 40 are 21% less engaged and 27% less focused, often compelling them to put in a few extra hours to make up for the unproductive ones…

…and oh how the vicious circle continues especially now we are always connected, always contactable, always on.

So what’s the alternative? Well, last year the Swedish software company Filimundus experimented with reducing the work day to six hours (whilst paying their people the same money). It has been successful enough that they plan to continue it, and they anecdotally report no perceivable drop in productivity, i.e. their people are generating as much output from six hours as they used to get from eight…and they are happier and more engaged while they do it.

This is the same experiment that my team and I have now embarked on. Can we reduce our hours, improve our quality of life and still get the important work done? We are only a few weeks into the experiment, but I feel there has already been significant benefit. Primarily, it has given me permission to seek better balance. As someone who works for themselves, there is always work to be done. The 30-hour target immediately gave me permission to switch off, take breaks, go for a swim or a walk, have lunch with Naomi or go to the movies with the girls on a Friday afternoon. In addition, I’m more conscious of how I spend my time when I’m actually working. I have two hours less each day, which leaves less time for procrastination and low-value work.

I want to start work when I’m ready, finish when I want, and get as much done as I can in between.

But this is not just an issue to be addressed by the self employed, it is just as relevant for larger organisations. One of the biggest fears I find amongst organisational leaders is the inability to escape their technology, and subsequently their work. Yet it is often the decisions that are made (or not made) at the top of organisations that are perpetuating the problem.

They’ve supplied employees with laptops and smartphones.

They’ve let their people take work home on weekends.

They haven’t questioned emails sent by their direct reports late on a Sunday night.

And there’s still an expectation that everyone be in the office at 8:00 AM on Monday morning.

The promise of technology was improved flexibility, not bonus productivity. And although this extra work may not have been explicitly requested, it is ultimately endorsed through its acceptance.

This is not a post aimed at discouraging the use of technology; it’s a post aimed at encouraging us to use it in the right way. As pointed out by the director of Melbourne University’s Centre for Workplace Leadership, Professor Peter Gahan:

“When it is planned for well, you should be able to get the same levels of productivity out of people working shorter hours with more technology, and so on, than you used to get out of eight.”

Why not cash in some of our technology dividends and also take advantage of the flexibility that technology provides? Many of us now have a choice as to when and where we work, which means we don’t need to turn up to an office and try to complete everything in one stint (whether an eight-hour or a six-hour one). We can work from home, break up our day…work when the inspiration hits us, and in doing so work fewer hours and get more shit done!


Spending your technology dividends

In much the same way as we look at financial investments, putting our time, energy and money into technology should be done with an eye on making a return. But whereas financial investments generally create a financial return, technology can generate a variety of benefits. Broadly, I describe these as technology dividends.

There are four broad types of dividend: flexibility, productivity, monetary and quality.

Flexibility dividends arise because technology now allows us to work more easily across geographical and even chronological boundaries. Although many roles still require co-location with particular equipment or people (it is still incredibly difficult to be a work-from-home neurosurgeon or a work-from-home mechanic), an increasing number of jobs can be done remotely and in different timezones.

Productivity dividends accumulate as we employ technology to do things faster. This could be as simple as sending an email as opposed to writing, printing, stamping and posting a letter or it could be through avoiding unnecessary travel by using Skype for a meeting. In both cases technology allows us to save time compared to what we did in the past.

Just as technology allows us to do things faster, it also allows us to do things cheaper. Monetary dividends are the cost savings we generate as a result of employing technology. Email is not only a faster way of sending a message compared to writing a letter, it also incurs a small fraction of the cost. The cost saving opportunities of technology include everything from cheaper airfares through easier comparison of prices, to cheaper music via streaming vs purchasing CDs.

Quality dividends are the result of doing better work (often whilst also doing it faster and cheaper). We can shoot HD video on our smartphones and edit it on free software cheaper, faster and more conveniently than a professional videographer could have done five years ago. And we can make better, more informed and ultimately more valuable decisions because higher quality information is at our fingertips.

As technology continues to improve and we invest more time into using it, the question arises: how are we going to spend our dividends? Much like with financial investments, some of these dividends might be reinvested; at other times we might want to cash them out.

When people say they’re not good with money, it generally means that they don’t know how to spend it well, not that they don’t know how to earn it. I think the same applies to technology. People who think they aren’t good with technology probably lack intention in how they use their dividends. Rather than using the flexibility to work from anywhere, whenever they want, they end up working everywhere and all the time. Rather than using their productivity to reclaim some time, they end up filling that time with more work.

Last year my business manager Sunny and I both cashed in part of our flexibility dividend. I used mine to move out of the city to the Mornington Peninsula, where I work from my backyard studio; Sunny uses hers to work remotely from her home in Manila. This year my team and I are going to cash in some of our productivity dividend and experiment with a 30-hour work week.

What technology dividends have you generated and how are you planning to use them in 2017?

 


What can you get for $50?

Promotional videos once cost tens of thousands of dollars, and required a professional videographer and access to a hundred-thousand-dollar professional video editing suite.

To get value from the investment, videos focused on generic content that appealed to a broad audience. Much like a billboard on the side of the road, the primary objective was brand awareness.

But in a world of information overload and increasingly scarce attention, promoting brand awareness is just noise.

We need targeted messages and niche offerings that offer genuine value
every    time    we    share*

*  Note: Posting an endless stream of motivational quotes on social media is not adding value to anyone.

In contrast, this video focuses on one idea for one audience.

It was created in an afternoon

shot using a smartphone (but in full HD)

was edited on a laptop

by a guy in the Philippines

for $50

As technology becomes cheaper, faster and more reliable it allows us to package and share valuable ideas that solve one problem at a time. As a result markets are narrowing and services are becoming niche.

Digital is driving a future where providing the perfect solution to a few is a far more valuable strategy than providing an OK solution to many.

Eyes and opportunities

Just as day follows night, the exponential growth in computing power is creating an exponential growth in digital opportunity.

There are more apps, platforms, devices and integrations than ever before (and there are many, many, many more still to come), and each one might be an opportunity to do things faster, cheaper, and better.

But…

in most organisations the responsibility for digital is limited to the IT department. Which means the number of people tasked with identifying and acting on these opportunities is growing linearly at best, and is stagnant at worst.

So…

If you don’t have hundreds of digital opportunities coming onto your radar, it doesn’t mean they don’t exist, it just means

                             the number of opportunities exceeds the number of eyes looking for them.

In the future (and when I say ‘future’ I mean ‘now’) everyone will need to start taking responsibility for digital.

 


  1. Computing power defies expectations
  2. Number of apps to double (thankfully some are not crappy games)
  3. IT workforce is growing…just very slowly
  4. Are robots more aware of what’s going on than you?

We need to learn out loud

One of my favourite books of the last few years has been Smarter Than You Think by Clive Thompson. In it he introduced me to the concept of thinking out loud. To paraphrase Clive (badly), thinking out loud is the process of putting incomplete thoughts and ideas out into the world so that like-minded and otherwise interested people can contribute to them, and in the process help you both learn. The true value of thinking out loud is the learning that comes from it.


I would argue that in an increasingly dynamic (and may I dare say disruptive) work environment the ability to learn out loud is not only valuable, it is fundamentally required.

To understand this, let us take a moment to look at the antithesis of learning out loud, which is, quite obviously, learning quietly. Now this may not be a concept you have heard of before (ba-boom) but it is a type of learning you are all too familiar with. Learning quietly is what we were taught to do at school; it normally involved listening to a teacher, reading a textbook or taking a test…all in hushed silence. Although there is some research to suggest that overly noisy environments can be disruptive to learning, this is not what we are talking about. Learning quietly is not about the environment we learn in but the way in which we learn.*

*This is not an introvert/extrovert thing either. I would argue that thinking out loud is just as relevant for both, but for introverts there might be larger doses of self-reflection in between.

This quiet, studious approach to learning might have worked in a world full of ‘facts’, when the ‘truth’ was printed and bound into textbooks, the teacher’s role was to recite, and the learner’s job was to digest and regurgitate. Am I talking down learning quietly? Well, I suppose I am. The more I reflect on the close to two decades I spent learning like this, the less sure I am that it served me well.*

* I am including universities in the learning quietly approach as this was still the dominant form of learning that I experienced there. Case in point, when two of my friends asked if they could do a joint PhD on collaboration they were turned down…a PhD on collaboration could only be done by one of them because the university wouldn’t be able to determine who did the work and therefore who ‘made the grade’.

In contrast, learning out loud is a collaborative approach. As pointed out earlier, it involves putting incomplete thoughts and ideas out into the world and getting feedback. Learning out loud is the cognitive equivalent of learning by doing. It is a proactive and iterative approach that involves making mistakes and adjusting accordingly. It is best suited to complex environments where the answer is not known, and often not knowable. According to Dave Snowden (who has an extraordinary ability to make complexity simple), the best strategy to employ in such complex situations is Probe – Sense – Respond. Take action (probe), determine whether the outcome was good or bad (sense), then act on what you found: if it is good, do more of it; if it is bad, do less (respond).

When we learn quietly we do not probe. Instead, we rely on other people to do the probing for us and just hope we get to read about it in a blog article or textbook later on. But regardless of how similar another person’s circumstances and experiences are to your own, they will never be the same, and as a result the outcomes will also be different.

So beware of false prophets when it comes to technology. Any vendor peddling the perfect answer, a turnkey solution…or using the words ‘best practice’ followed by just about anything, is primarily selling jargon. Digital technology offers wonderful opportunities, but you are ultimately going to have to take some responsibility for learning it and implementing it for yourself…and if you’re going to learn, can I suggest that the best way to do it is to find a group of like-minded (or otherwise interested) people, and learn out loud.

You’re not competing with technology

There has been a lot of media attention recently focused on the impending jobs apocalypse being brought about by digital technology,* but there is something just a little simplistic about the idea that technology destroys jobs.

*The most recent example of this I’ve seen was an article in The Conversation arguing for a Universal Basic Income, sent to me by my good friend Kath Walters.

The reality is that (at least for now) most jobs are far too complex for technology to do. They consist of multiple objectives, hundreds of actions and thousands of discrete tasks. Although the wonders of technology make almost anything possible, it doesn’t necessarily make everything probable.

At the end of the day, every robot (whether a physical robot or a software algorithm) that competes for our work is developed by an individual or a company with at least a sideways glance at future economic return. Programming a robot to replace your specific job is therefore not a great use of their time. It would be both expensive, because of the complex interaction of objectives, activities and tasks, and uneconomical, because your unique combination of these things means that a robot built to replace one specific job generally lacks scalability.*

*The exception to this is when one job consists of the same set of tasks completed over and over again…and when this job is done by a whole bunch of people, such as taxi drivers or truck drivers, there is generally enough scale to warrant significant investment.

In my book, Analogosaurus, I pointed out that it was much more likely that technology would take tasks away from us, and recent research by McKinsey seems to back up this premise. In fact, their research suggests that for more than two thirds of the current US workforce, 30% or more of tasks could already be automated using off-the-shelf technologies.

Yet at the same time, both jobs and companies are disappearing at the fastest rate in recent history. So if you aren’t competing with technology for your job, then who are you competing with?

Well, that’s easier to answer. You’re competing against other organisations that are using technology much better than you do.

Image credit: Fagor Automation via flickrCC

When the drought turned to flood

I read an article recently that said that following the Afghanistan war, the US Army completely changed its information sharing policies. It turns out that the Army had important information that could have saved lives and assisted its own troops, but the ‘need to know’ approach to information management meant that the right information didn’t get to the right people fast enough. The new approach is to share information on a ‘need not to know’ basis. Rather than ‘Is there anything I should tell you?’, it is now ‘Is there anything I should hold back?’

Over the last couple of decades, we have moved from a world of information drought to a world of information flood. When you are in drought, you hoard the things that are scarce, and ownership of that resource is a source of power. The problem is that the same hoarding mentality doesn’t serve you well in an information flood. Hoarding information just means you are more likely to drown in it.

In an information flood, value is created by diverting the resource to where it is most needed. In the US Army’s case, this was the troops on the ground. Think about which of your mindsets and approaches to information management are based in a scarcity or drought mindset rather than an abundance or flood mindset. Are they still serving you, or is it time to update your tools and techniques for a world of information overload?