Tuesday, 22 August 2017

Artificial Intelligence: what will they think of next?

Back in the 1980s, computers started to hit the masses with products like the Sinclair ZX80, the Commodore 64 and the BBC Micro. Many kids at school got the computing bug. This had little to do with gaming, which was still in its infancy; it was all to do with coding.

There was a saying which neatly summed up the interactions with those early computers:

Computers don't always do what you want. They always do what you tell them.

The apparent "intelligence" of a computer was not derived from the computer itself but from the intelligence of the coders who wrote the programmes which it ran. So if you wrote bad code, the computer appeared to misbehave. My computer spent a lot of time on the naughty step.

I remember being impressed and amused when I came across an early computer running a program called Eliza (developed originally in the mid-1960s). Communicating via the keyboard, you could actually converse with an imaginary person within the computer. The responses implied some form of intelligence, but after a short period of time you realised Eliza was pretty dumb, though nevertheless likeable. Eliza was simply a set of algorithms and rules which matched patterns in your text in order to mimic intelligent conversation.
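To give a flavour of how little was going on under the hood, here is a minimal Python sketch of Eliza-style pattern-and-response rules. It is purely illustrative: the patterns and replies are invented for this example and are not Weizenbaum's original script.

```python
import random
import re

# A few illustrative Eliza-style rules: a pattern to look for in the user's
# input and some canned responses, optionally reusing the captured text.
RULES = [
    (r"\bI am (.*)", ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (r"\bI feel (.*)", ["What makes you feel {0}?", "Do you often feel {0}?"]),
    (r"\bmy (.*)", ["Tell me more about your {0}.", "Why is your {0} important to you?"]),
]
FALLBACKS = ["Please go on.", "I see.", "How does that make you feel?"]

def respond(user_input: str) -> str:
    for pattern, responses in RULES:
        match = re.search(pattern, user_input, re.IGNORECASE)
        if match:
            return random.choice(responses).format(match.group(1).rstrip(".!?"))
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    # e.g. "Why do you say you are worried about my job?"
    print(respond("I am worried about my job"))
```

A handful of rules like these, plus some pronoun swapping, is essentially all Eliza was.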

One of the most significant definitions of thinking machines came from Alan Turing (best known for code breaking in the Second World War and for being the main character in the film The Imitation Game). He proposed a test in which a machine would be set up to converse with a human remotely, via a screen and text only. If the human could not detect that they were conversing with a machine, then the so-called Turing Test would be passed.

Eliza fooled some of the people who chatted with it and is sometimes cited as an early, informal pass of this test, but it was certainly not an example of artificial intelligence.

There is no universal definition of artificial intelligence, but here are some characteristics which commonly apply:

A machine, computer system or software which can:
  • carry out tasks normally requiring human intelligence
  • think for itself and make informed decisions
  • learn for itself, adding to its pool of knowledge beyond that of the original program
  • generate original ideas
  • make predictions based on analysis of past (or perhaps simulated) experiences

There have been philosophical debates about whether man-made machines can truly "think" going back to Socrates and Plato. Questions of sentience, feelings and the possession of a soul by machines have been explored by science fiction writers, from Philip K. Dick in "Do Androids Dream of Electric Sheep?" through to the adventures of Data in Star Trek: The Next Generation.

The challenge we are about to face is more down to earth. Philosophy aside, if a machine can accurately mimic the behaviour of a human, think and carry out tasks for itself, then why not have it do those jobs which humans are doing today?

We are living in an information age. The vast majority of us are "information workers". Robots in this context are not restricted by physical form, and artificial intelligence can already carry out information worker tasks as well as or better than humans, and certainly more cheaply. Eliza is arguably coming around to haunt us in the guise of the latest generation of chatbots. These 21st century editions are not constrained by keyboards and text (although you will still find this communication mode used on websites); nowadays they can use natural speech to converse with customers and even translate between languages. We are even seeing 24-hour shopkeepers and helper systems being put into homes, with names like Alexa, Cortana and, unimaginatively, Google Home.

These forms of artificial intelligence are still very basic and certainly have no feelings, yet they will quite happily make you redundant. Predictions about how many jobs are truly at risk vary enormously but have one thing in common: big numbers. I am heartened by one of the lessons of history. Back in the 1960s, around the time Eliza and I were being conceived, household appliances like washing machines, blenders and toasters were set to make life easy. Similar advances at work, including the rise of computers, were going to lead to three-day weeks and a leisurely life of luxury. This did not come to pass. Instead, ever increasing demand for more machines, computers, blenders and all manner of other things has led to shops open on Sundays, 60-hour weeks and a whole lot more stress in our lives. My point is, demand for human labour does not diminish, but the roles in demand will change.

Artificial intelligence is already moving beyond the bounds of the digital world and into the physical one. Driverless cars are a reality, and if you are thinking "well, I haven't seen one", I can assure you it won't be too long. Southern Rail staff in the UK have been going through a long, painful battle against the introduction of trains with driver-operated doors, on the grounds that not having a guard do this reduces passenger safety. A time will come when driverless trains prove to be safer than human-operated ones and, unfortunately, train driving as a career will be heading into a siding.

If you are thinking this topic is getting a bit dark, you ain't seen nothing yet. In 2014, one of the greatest thinkers alive today had this to say about AI:

"The primitive forms of artificial intelligence we already have, have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race,"  Stephen Hawking in an interview with the BBC.

Nick Bostrom's book "Superintelligence: Paths, Dangers, Strategies" has become a bestseller. Bostrom is the founding director of Oxford University's Future of Humanity Institute and so spends a lot of time thinking about this stuff. Not everyone agrees on when computer intelligence will match that of a human, but sometime well before the end of this century seems to be the consensus. Bostrom does not predict that it will all go wrong, but he highlights a number of possible ways that it could.


In the meantime, while we try not to have nightmares about The Terminator coming true, artificial intelligence is on the cusp of making a real difference to our lives, both at home and in our work. Computers may start doing more than what we tell them and actually start showing some initiative. And that has surely got to be a good thing.


Gartner's latest view on AI - here

Wednesday, 9 August 2017

What is Docker containerisation?

Docker is an open source platform which encapsulates applications into highly efficient “containers”. Many containers can run on a host without interfering with each other. The community edition is available for free.

A 3-minute video version of this post is here.


Docker Inc is the commercial organisation which sells Docker Enterprise Edition to organisations that need tools which are fully tested, validated and supported for use in an enterprise setting.

You may be familiar with server virtualisation from vendors like VMware and Microsoft. It enables the creation of virtual machines. Many virtual machines can run on a single host without interfering with each other.

That sounds a bit like Docker containers, but server virtualisation and containers have different origins, and on the inside they are different too.

The adoption of server virtualisation has been driven by the need of IT departments to drastically reduce IT infrastructure running costs. Physical servers historically ran a single application each to ensure no conflicts occurred between applications. With requirements for lots of applications, that meant lots of expensive servers, all needing space, power and cooling. Advances in computing power and the development of virtualisation technology mean that a single host server can replace many physical servers and contain many virtual machines, each with its own operating system and application.

Containerisation, on the other hand, has been driven by the needs of application developers. As businesses increasingly use digital technology to become more competitive, the need to update and add new applications faster has become critical. Containerisation is linked to the rise of DevOps, whereby businesses put dedicated teams in place to develop new applications. These projects must migrate easily from the test and development phase into production so that they can run reliably and at scale. Ongoing application development then requires the code to keep moving between environments seamlessly.

A big overhead when developing applications turns out to be the IT infrastructure itself. Software is typically developed by teams using a range of platforms like PCs, laptops, servers and even cloud-based systems. These may not be compatible. Each platform move requires customisation and testing for reliable operation. This really slows down progress and hinders collaboration across developer teams.

Containerisation starts with the application. An application does not need an entire dedicated operating system; it can make do with a cut-down set of files, its "bins and libs" (binaries and libraries), which together can be put in a container. The container needs access to a shared operating system plus the Docker software. The operating system is typically Linux, with Microsoft Windows a more recent option.

Containers don't each hold an operating system, so they are much smaller than virtual machines and many more can fit into a single host. A container is much faster to start up and easier to maintain than a virtual machine, helping developers be more productive.
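As a rough sketch of what goes into a container (the application, base image and file names here are hypothetical), an image is typically described in a short text file called a Dockerfile, which starts from a shared base image and adds only the application and its dependencies:

```dockerfile
# Start from a slim shared base image rather than a full dedicated operating system.
FROM python:3.11-slim

# Copy in just the application code and the libraries ("bins and libs") it needs.
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .

# The command the container runs when it starts.
CMD ["python", "app.py"]
```

Built with "docker build -t myapp ." and started with "docker run myapp", the resulting image runs the same way on a developer's laptop, a test server or in the cloud.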

Virtual machines can be moved around between hosts or even out to the cloud, but this is typically under the control of IT departments, and the functionality often comes at a cost. With containers, developers don't need to ask IT for help. Sharing containers, jointly developing applications and moving them from test to production to the cloud, and back, is quick and easy.

So when comparing containers with virtual machines, it's less to do with what they are and more to do with why they came about in the first place.

 - Server virtualisation is helping IT departments do more for less.
 - Containerisation, led by Docker, is helping developers do more, faster.

Luckily, they can also co-exist: containers can run inside virtual machines, so organisations can benefit from the best of both.

Although Docker is not the only container technology, it is the most widely adopted and the most talked about. More information is available at www.docker.com

Sunday, 19 February 2017

Internet of Things: the WHAT and WHY

There is a lot of hype about the Internet of Things. There must be something in it - but what?

50 billion devices connected to the Internet. 10 trillion Dollar opportunity. 17 zettabytes of data storage. 

I can read (or make up) statistics as well as the next person, but do these figures actually mean anything? I suspect they are somewhat missing the point.

In order to make some sense of IoT, we need to look at it from two different but interdependent perspectives:

  • What it is
  • Why it is


Watch the video version of this post here

The WHY needs to be understood for each and every IoT project. Without some sort of business return, projects will never get off the ground. However, without looking at the WHAT, it's very hard to start imagining how it can help us in the first place. So let's start there.

Most definitions of IoT start with something like "the interconnection via the Internet of devices (things), enabling them to send and receive data." This is not the whole picture. We also need to include in our definition what can be done with this new data through analytics or data visualisation. We can potentially discover new business insights which help us reduce costs, provide new services or revenue streams, or improve our competitive edge.

This all sounds a bit fluffy until you apply it to an example:

  • Car insurance is highly competitive. Premiums are affected by a driver's history, with factors like length of driving experience, no-claims periods and previous accidents. IoT has enabled insurers to modify premiums based on actual driving behaviour. The addition of a "black box" to the insured car enables tracking of acceleration, speed and position. These devices are connected to the internet via the mobile network. Not only can the insurer monitor individual drivers, they can also use data collected across all their customers to build a much richer picture of risk, for example by identifying high-risk areas for collisions. This data could potentially be shared in anonymised form with external parties for a fee; examples could be the police, who may have an interest in areas where speeding is rife, or the Highways Agency, interested in accident blackspots. With a better understanding of risk, the insurer can potentially undercut the competition or devise new policies tailored for specific demographics. A simple sketch of this kind of analysis is shown below.
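As a hedged illustration of the analytics side of this example (the readings, field layout and thresholds below are all invented), a few lines of Python are enough to turn raw black-box data into a list of locations where speeding is common:

```python
from collections import Counter

# Hypothetical telematics readings: (driver_id, latitude, longitude, speed_mph, speed_limit_mph)
readings = [
    ("drv1", 51.5074, -0.1278, 48, 30),
    ("drv2", 51.5074, -0.1278, 41, 30),
    ("drv3", 53.4808, -2.2426, 28, 30),
    ("drv1", 53.4808, -2.2426, 52, 40),
]

def risky_locations(readings, min_events=2):
    """Count speeding events per (rounded) location and flag the hotspots."""
    speeding = Counter()
    for driver, lat, lon, speed, limit in readings:
        if speed > limit:
            # Round the coordinates so nearby readings group into one area.
            speeding[(round(lat, 2), round(lon, 2))] += 1
    return {loc: n for loc, n in speeding.items() if n >= min_events}

print(risky_locations(readings))  # e.g. {(51.51, -0.13): 2}
```

The same pattern, applied across millions of readings, is what turns raw device data into the richer picture of risk described above.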

There are some technology trends which are combining to enable organisations to realise value from IoT projects which would not have been feasible in the past. 

These are why IoT has become such a hot topic.
  • Sensors can be wirelessly connected to the internet within small battery powered modules which can last for years without replacement.
  • Mobile carriers are using IP based protocols and developing low cost narrowband connections. These support use cases which are becoming commercially feasible for the first time.
  • Real-time and historical analytics capability has become affordable and readily available to a wide audience, and can be consumed in multiple ways, including on-premises or in the cloud.
  • A number of vendors are productising these solutions and making it easier and quicker for customers to leverage the Internet of Things.
The two main challenges for anyone wanting to cash in on IoT are directly related to our two perspectives:
  • WHAT is it
    Projects span a huge range of technologies, from "things" through to data analytics. It will be hard for a single supplier to provide expertise and services capability across the whole spectrum. A partnership approach will often be the only workable option.
  • WHY is it
    Even with expertise across the IoT spectrum, projects will never fly without a strong business case. Suppliers will need to couple their IoT skills with a deep understanding of their customer's business and engage using a consultative approach.
Finally, IoT projects can be bespoke and costly. The secret to success will be the identification of verticals where solutions can be developed with a wider market appeal.



Tuesday, 25 October 2016

Exploring hyper-converged without the hype

Hyper-converged solutions from a number of vendors (new and old) have grabbed the headlines and become popular with organisations which need a simple, small-footprint package, including hypervisor, compute, networking and storage, which can scale out to meet their needs. This architecture is being touted as the "software defined" future of the data centre, able to eliminate IT silos and fit into any size of organisation.
 
By using software to both glue the components together and to provide a simple management interface, these systems can be deployed quickly and are space efficient with a low management overhead.


Hyper-converged infrastructure (or HCI) also has the advantage that all the hardware components are housed in a standard x86 server chassis (often with more than one server "node" per box), which makes it high density and low cost.

 


Watch this short video to see how HCI compares to traditional and converged solutions.

However, there are a few limitations to watch out for:



  1. You cannot grow storage capacity without investing in compute too (which you may not need). 
  2. Performance will be restricted by the network connecting the system nodes – the more you add, the greater the inefficiencies.
  3. The virtual SAN within the solution can only be accessed by virtual machines, so any non-virtualised database applications may not be able to use the storage. Note that some solutions allow an iSCSI connection to the virtual SAN, but this may not provide the low latency and performance needed.
  4. Although it uses relatively cheap hardware, the software to run the solution is not cheap. Experience shows that in larger systems, traditional or converged solutions may be lower cost.
  5. Although designed to eliminate silos by providing a single scalable infrastructure, hyper-converged can become its own silo when it cannot meet the requirements of all applications.
  6. Database applications are usually licensed by the processor cores that they "touch". In a hyper-converged solution, this could be the entire cluster, meaning high costs or breaches of licence agreements as nodes are added.
Some other considerations:
  • It's worth checking out a number of vendors as there are differences between them and specifications are also evolving very fast.
  • Consider whether you want an appliance type solution or a software only approach (meaning you can build your own solution).
  • Think about your hypervisor choice and how that fits with the rest of your estate. You will probably want to be able to migrate virtual machines across datacentres and maybe out to the cloud. So don't let the hypervisor choice stand in the way of this.
So in summary, hyper-converged (HCI) can bring huge benefits in terms of flexibility, operational costs and the speed of deploying new applications; however, there are some limitations which need to be understood. In other words, go in with your eyes wide open.
 
 

Thursday, 19 December 2013

Object Storage - unsung hero of Cloud and Big Data



They say that information is power, but in a world where the majority of information is digitised and stored electronically, that power can only be realised if the information can be found, and in the context of what you need to know. Digital information needs to be contextual and retrievable.

This has long been solved in the case of structured data, which resides within databases. These provide relational context and indexes for search and retrieval. But what about unstructured file data, which is where the vast majority of information now lives? The current term for this challenge is Big Data.


One of the key technologies which will help with this challenge is little known and does not come with a fancy title – it is called object storage.


To best explain what object storage is we need to start with the ones and zeros of data, then work our way towards the information that we intend to work with.


Hard drives are at the heart of most storage systems today. Data is retained in blocks of ones and zeros. SAN storage systems expose these blocks to the applications running on connected servers. This can be highly efficient, for example with databases which require fast granular access to the data.


Users, however, typically need to work with files like spreadsheets, Word documents, slide decks, images, emails etc. Each file comprises a sequence of blocks of data which together make up the file. Furthermore, the file must reside in a file system which provides a nested folder structure so its location can be indexed. This functionality is provided by an operating system which sits logically above the hard drives. This can be integrated into the storage system (making it a NAS device) or can be external in the form of a server (file server) or dedicated appliance (NAS gateway or head).


SAN and NAS respectively provide block and file access to data, and this has historically catered for most organisations’ needs. However, there are some shortcomings. The fastest growing form of data is unstructured, or file, data. Files are getting bigger and more numerous, and rarely get deleted. As file systems get larger, they slow down, and in fact they have hard limits to their scalability. They are also restrictive in the way that information can be searched, due to the non-contextual nature of the nested folder structure (which is determined largely by individual users). This system works OK at an individual level because we all apply some logic to how we organise our files. However, at an organisational level, this logic is unknown. So, for instance, finding all files which contain "confidential information" relating to a particular customer might be nigh on impossible.


With object storage, we have the ability to build context and structure right into the file itself. To achieve this, we wrap the file with “metadata”. This is information about the file. The combination of data and metadata is called an object.


This sounds simple, but the implications are enormous.


Scalability and Performance. Object stores remove the nested folder structure which is the barrier to very large datasets. Information is found instead by searching through the metadata for what you need. It is similar to the way search engines find information on the internet. If you like your files structured by date, by customer, by whatever – not a problem. This can be added into the metadata. You can even simulate a whole nested folder system by including the path in the metadata for each object.
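A toy Python sketch (purely illustrative, not any real object store's API) shows the idea: each object is data wrapped with metadata, and retrieval works by searching that metadata rather than walking a folder tree.

```python
import uuid

# An in-memory "object store": each object is data plus descriptive metadata.
store = {}

def put_object(data: bytes, **metadata) -> str:
    object_id = str(uuid.uuid4())
    store[object_id] = {"data": data, "metadata": metadata}
    return object_id

def find_objects(**criteria):
    """Return the ids of objects whose metadata matches all the given key/value pairs."""
    return [oid for oid, obj in store.items()
            if all(obj["metadata"].get(k) == v for k, v in criteria.items())]

put_object(b"...", customer="Acme", doc_type="contract", path="/sales/acme/2013/contract.docx")
put_object(b"...", customer="Acme", doc_type="invoice")
put_object(b"...", customer="Globex", doc_type="contract")

print(find_objects(customer="Acme", doc_type="contract"))  # one matching object id
```

Note how a folder path can simply be carried as one more metadata field, as described above.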


Big Data. Because objects have contextual information included, organisations can search across datasets and extract just the information they need. A great example would be a hospital environment. It is quite possible to store patient records such that personally identifiable information is retained in metadata and access to it restricted. Analytics could then be run across patient data with total anonymity retained.


Cloud. Massive scale plus the ability to apply security policies to data based on fields within the metadata makes object storage a great solution for service providers who store data on behalf of customers. Storage can be carved up into virtual containers based on who owns the data (multi-tenancy). This is exactly what the likes of Amazon, Google and many others are basing their businesses on.
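In practice this is exposed through simple web APIs. For example, assuming an S3-compatible service and the boto3 Python client (the bucket, key and credentials below are made up and would need to exist), metadata is stored and retrieved alongside the object:

```python
import boto3

s3 = boto3.client("s3")  # assumes credentials and an S3-compatible endpoint are already configured

# Store the object together with its metadata in a tenant's bucket.
s3.put_object(
    Bucket="acme-tenant-bucket",
    Key="contracts/2013/contract-001.pdf",
    Body=b"...",
    Metadata={"customer": "Acme", "classification": "confidential"},
)

# Later, retrieve just the metadata without downloading the data itself.
head = s3.head_object(Bucket="acme-tenant-bucket", Key="contracts/2013/contract-001.pdf")
print(head["Metadata"])  # {'customer': 'Acme', 'classification': 'confidential'}
```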


Archive. One of the key uses of metadata is to incorporate a checksum. This essential piece of information tells the system whether the object is valid. In other words, the system will know if the object has been changed by a user, corrupted or even deleted. In fact, object stores are typically set up as WORM ("write once, read many"). This means that objects are not actually changed; rather, a new version is created. This provides the ability to roll back to previous versions or to a point prior to deletion.


The combination of checksums and WORM functionality make object stores ideal for long term archival of data.


Firstly, by automatically keeping a copy of each object, the system can be made self-healing. If an object is corrupted or lost, the system knows and can recreate the original from the copy. Object stores are mostly designed with a scale-out architecture. If the store is spread across two geographic locations with a copy in each, then arguably the data no longer needs backing up to tape. This in itself can dramatically reduce operational costs.


Secondly, each object carries its own integrity, meaning that objects can be migrated from one store to another whilst retaining proof of their original content. This also means that the underlying hardware can be upgraded as new technology is developed and the objects remain intact indefinitely (with a full chain of custody). Now the data can outlive the infrastructure and the applications which created it.


Thirdly, the metadata can include information pertaining to confidentiality and retention period. Object stores can act upon these details making them ideal for enforcing legal compliance and governance policies.
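To tie these points together, here is a continuation of the toy sketch from earlier (again, illustrative only, not a product API): a checksum recorded at write time lets the store detect any change, new versions are appended rather than overwritten (WORM), and retention details ride along in the metadata.

```python
import hashlib

versions = {}  # object name -> list of immutable versions, oldest first

def write(name: str, data: bytes, **metadata):
    """WORM-style write: never overwrite, always append a new version with its checksum."""
    record = {"data": data, "metadata": metadata,
              "checksum": hashlib.sha256(data).hexdigest()}
    versions.setdefault(name, []).append(record)

def verify(name: str, version: int = -1) -> bool:
    """Recompute the checksum and compare it with the one stored at write time."""
    record = versions[name][version]
    return hashlib.sha256(record["data"]).hexdigest() == record["checksum"]

write("patient-123/record.txt", b"v1 of the record", retention_years=30)
write("patient-123/record.txt", b"v2 of the record", retention_years=30)

print(len(versions["patient-123/record.txt"]))  # 2 versions retained
print(verify("patient-123/record.txt"))         # True while the data is intact
```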


So you may ask, why are object stores not taking over the world of storage?


Well, in one respect, they already are. Many cloud providers have built infrastructures on home-grown object storage technology. This has given them leading-edge advantages over traditional solutions, which is a strong reason why they have not been advertising the fact too loudly. Vast quantities of data are already sitting in object stores. IDC forecasts that worldwide revenue for file-based and object-based storage will reach $38 billion by 2017, a huge jump from the market's estimated $23-billion-plus revenue in 2013.


As organisations increasingly look to the benefits of cloud type infrastructure, whether public (outsourced) or private (in-house) or hybrid (combination), so object storage will become a key consideration. Traditional file systems will become restrictive and impact business, but it’s a matter of scale. The cloud service providers are leading the charge, but many others may well follow. 


Object storage does require change, in terms of technology and, more significantly, in terms of processes. These may be reasons for slower adoption. Another concern might be that object stores are not yet standards-based, meaning there is potential for vendor lock-in. Some solutions are more open than others.


Is object storage right for you?


This will of course depend on your situation. If any of the following apply to you, then object storage could be a consideration:


  • Massive growth in unstructured data which is straining traditional storage systems.
  • Long-term retention, or a specified retention period, for data, especially where chain of custody is a requirement (data compliance and governance).
  • Archive economics – when the savings associated with archiving data away from primary storage and out of the tape backup cycle can outweigh the costs of implementing an active archive with object storage, creating a business case for change.
  • Big Data – when silos of data, by application or physical storage, are impeding the ability to find and retrieve information for analysis or decision support.
  • Cloud services – when you need to provide a multi-tenanted data service to your customers, whether internal or external to your organisation.


Object storage will not replace traditional SAN and NAS solutions; these have advantages of their own for many applications. However, as unstructured data continues to grow, object storage will become a complementary and commonplace addition to storage infrastructures everywhere.

Unclear about SAN and NAS? Check out www.2decipher.com

Tuesday, 10 September 2013

Fancy a career in IT Sales?


We all know loosely what Information Technology is, because it touches all of us on a daily basis. It is a term which has been around since 1958, when it appeared in the Harvard Business Review, and has since been more formally defined by the Information Technology Association of America as "the study, design, development, application, implementation, support or management of computer-based information systems". The fact that the term was coined in the Harvard Business Review gives us a clue as to its significance. It is not so much what it is as what it does that is of importance.

Businesses have come to rely on Information Technology to improve productivity (for example, automating production processes in manufacturing, speeding up money transfers in banks and improving collaboration between workers in all types of businesses). More recently, new businesses have evolved which could not exist without information technology (for example Amazon, which started as an online-only book store, Google, which started as a search engine, and Facebook, which is at the centre of the social media revolution). The benefits that IT brings to the commercial world apply equally to the public sector and indeed to individuals. IT now touches all of us during our working day as well as our personal time, and as applications become more mobile, more and more of our waking day is affected.

The upshot of all of this is that selling IT is big business and, if entered into as a career, something that could last a lifetime - if you have the capacity and motivation to keep up with constant change. IT represents a significant amount of spend (and therefore sales opportunity). Analyst firm Gartner reckons global spend on IT in 2013 will reach 3.8 trillion US dollars (growth of 4% on 2012 despite severely poor economic conditions). This figure is pretty meaningless on its own, so let's compare it with the International Monetary Fund's estimate for global GDP (Gross Domestic Product), which is circa 75 trillion US dollars for 2013. This implies that about 5% of the world's available spend goes into IT. It is also quite consistent with the levels organisations typically spend on IT as a percentage of their revenue, although this can vary quite widely depending on the size of the organisation and the industry within which it operates. It should be noted that this spend is not just based on IT products purchased; it includes costs for services, support and maintenance contracts and some IT staff costs too.

As any sales coach will tell you, one of the keys to winning a sale is understanding how purchasing from you will support the buyer by adding value either to their business or to them personally. In other words - why customers buy IT. In IT sales, gaining an understanding of how technology relates to business will help you become more successful. You also need to know who to speak to within your customer's organisation to win the deal.

You may be interested to know that many successful IT sales people are not technical by nature. Let me emphasise this point.
 
You don't need to be technical to be successful in selling technology.
 
I can hear many of you breathing a sigh of relief. Here is the catch, though: you do need to build trust and be able to influence buyers who are technical experts. IT conversations can become complex quickly, so a grounding in what you need to know about technology can be invaluable.
 
A career in IT Sales can be highly rewarding in many ways - not least financially. There are many sales techniques that will help you on your way, but unless you can start talking the language of your customers, you may find yourself disadvantaged. So my advice is to spend a little time each week devoted to learning what IT is being used for and how it can help your customers achieve their goals.

To get started - check out www.2decipher.com

 

Tuesday, 23 July 2013

What is a server?

Servers are computers which typically support multiple users. As with all computers, they contain processors, memory, networking and data storage capabilities. Servers come in various shapes and sizes. This bit you probably already knew; however, there are three major categories of server, and you may have wondered what the difference is between, say, a "mainframe" and a "Unix server", and how these differ from standard servers. Read on to find out more.


1 Mainframe (proprietary)
Often dedicated to specific, enterprise-wide tasks, mainframes are the grand-daddy of servers. Based on a monolithic architecture, mainframes consist of hardware and software which is often provided by a single vendor - or perhaps by two vendors who are almost entirely reliant on each other. The lock-in between hardware and software is the very reason why mainframes still exist today and less costly alternatives have not ousted them. The sheer cost of developing alternatives, and the costs for customers of moving away from systems which are tightly embedded in their business processes, mean they are here for some time yet. Mainframes today represent a very small part of the total server market. They are categorised as "proprietary" because of the strong tie-in between software and hardware.


2 Unix (open systems)
In order to break the stranglehold of mainframes and to lower the cost of computing, Unix-based systems were born back in the 1970s. Unix refers to the open standard operating system (software) that runs on these computers. Originally developed at AT&T and then enhanced by university researchers and students, the system broke the link between hardware and software, increasing competitiveness and reducing costs. However, as Unix evolved, new companies were set up to commercialise the new systems and provide business solutions based on combinations of hardware and software. This process led to proprietary versions of Unix evolving. Today, Unix systems represent well under half of the server market in terms of revenue and are seen as costly compared to the third category of server, which evolved from the first PCs in the 1980s. Many organisations continue to migrate away from Unix but, interestingly, there is a trend to incorporate some Unix-like technologies into industry standard servers (for example, server virtualisation is derived from "partitioning" technology).


3 Industry Standard (open systems)
The first IBM PC was produced in 1981. It consisted of an Intel processor and chipset, with storage controlled by a disk operating system called MS-DOS (how original). The MS stood for Microsoft - Bill Gates' most significant software release ever. PC stands for Personal Computer - the exact opposite of a mainframe in that, for the first time, users could have their own personal computing environment on their desk. IBM made the decision to open up the fundamental design of the PC so other manufacturers would help establish it as the industry standard.

As PCs evolved, it quickly became apparent that connecting them together could be useful. Data could now be shared. The natural progression of this was to have perhaps one PC with a big hard drive to hold data for others to share. And so the first industry standard server was born. Incidentally, there are some versions of Unix which can run on industry standard servers, which confuses things slightly, and there is a fully open operating system called Linux too.

Visit 2DeCipher.com to find out more.