Largely without our noticing, search engines quietly go about their business revolutionising how we live our lives. Whether we want to know the circumference of Saturn or which bus will get us to the local train station, search engines deliver answers to people across the globe in a matter of seconds. These incredible tools are brushed off as an innate aspect of modern living, but if we take a step back for a moment and consider what these powerful servers accomplish, it is pretty incredible.
Search engines (maybe not in the way we imagine them today) have been around since the dawn of the internet. In its incubation, the internet was composed of File Transfer Protocol (FTP) sites, which users could sift through to find specific community files and information. As more and more servers joined the internet, an efficient and relevant way of organising the chaos became necessary.
The First Search Engine
The first search engine to attempt to organise the internet, Archie, was conceived by computer scientist Alan Emtage at McGill University in 1989 and launched a year later. Created as a means for the university's computer science department to connect to the network, Archie was essentially an index of FTP files. It allowed users to look around the network and make simple search requests for files. Unlike modern search engines, Archie didn't understand natural language or index the contents of the files. Instead, users had to know a file's title, or choose their search string meticulously, to find what they were looking for. It wasn't until mid-1991, when Gopher was created, that content inside a file could be indexed.
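To make that limitation concrete, here is a toy sketch of Archie-style searching: an index mapping FTP hosts to file names (the hosts and files below are made up for illustration), matched by plain substring. There is no natural language and no content indexing; if the query isn't part of the file name, nothing is found.

```python
# Hypothetical index of FTP hosts and the file names they serve.
ftp_index = {
    "ftp.host-a.example": ["rfc1034.txt", "gopher-client.tar.Z", "readme"],
    "ftp.host-b.example": ["gopher-server.tar.Z", "notes.txt"],
}

def archie_search(query):
    """Return (host, filename) pairs whose file NAME contains the query.

    File contents are never examined -- only titles, as with Archie.
    """
    hits = []
    for host, files in ftp_index.items():
        for name in files:
            if query in name:
                hits.append((host, name))
    return hits

print(archie_search("gopher"))
# → [('ftp.host-a.example', 'gopher-client.tar.Z'),
#    ('ftp.host-b.example', 'gopher-server.tar.Z')]
```

A search for "gopher" finds both archives, but a search for any word that appears only inside a file would return nothing, which is exactly the gap Gopher's content indexing later filled.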
Before the WWW
Gopher was a client/server directory system created by a group of programmers at the University of Minnesota. They had been tasked with building a university-wide information system that would let users browse resources on the internet. Gopher became the first unified system to bring all of those resources together, and it did so in a user-friendly way that didn't require data to be entered into a centralised database. At this point in the internet's evolution, the network was primarily used by academics and government institutions, so Gopher thrived on structure rather than commerce.
As Gopher grew in popularity and demand, it became too difficult to maintain on a free-to-use basis, and the university began charging licensing fees to its users, foreshadowing the software's demise.
The Rise of the World Wide Web
When Tim Berners-Lee launched the WWW in 1991, it went down like a lead balloon, and well into 1993 Gopher was still deemed the most established and user-friendly way to browse the internet. What really triggered the sudden shift was NCSA releasing the Mosaic browser, alongside the widespread adoption of Windows on consumer PCs. Gopher's creators naively assumed the internet would remain a digital library for those wanting to undertake research, but we all know money comes first in the capitalist world. When businesses realised the internet could be adapted for advertisement, Gopher was rendered unimportant, and platforms that could support multimedia, like the WWW viewed through Mosaic, prevailed.
The Modern Search Engine World
Berners-Lee couldn't have predicted what the internet would become, and if he had, it's unlikely he would have been so keen to create it. If we ignore mass consumerism, data privacy breaches and Facebook for a moment and focus solely on the search engines of the modern world, we can appreciate the true intent of the internet and how far it has come in the last 30 or so years.
Gopher and its descendant human-powered directories still exist, but they are essentially there for nostalgia rather than actual web searching. Instead, we have learnt to power the web with automation and artificial intelligence. Crawler-based search engines such as Bing, Yahoo!, Baidu and everyone's favourite, Google, now use bots to index new content into their search databases.
Building on the foundations laid by Gopher, crawler-based engines scan web pages for keywords and assign those keywords to each page in their index. They then use an algorithm to calculate how relevant each page in the database is to the user's search string.
Each search engine's process differs slightly; some place more weight on keyword density, while others look at links and meta tags. Hence, a search on Google will bring up different results from the same search on Bing or any other engine. These crawler engines have become so advanced that they change their algorithms regularly, which means that unless you update your web pages frequently, you won't remain at the top of the search results. This helps ensure users get the most up-to-date information they are looking for.
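The index-then-rank idea described above can be sketched in a few lines. This is a minimal illustration, not any real engine's algorithm: it builds an inverted index from made-up pages, then ranks matches by keyword density, one of the signals the paragraph above mentions.

```python
# Hypothetical pages; real crawlers would fetch these over HTTP.
pages = {
    "example.com/saturn": "saturn is the sixth planet and saturn has rings",
    "example.com/bus": "the number 12 bus stops at the train station",
}

def build_index(pages):
    """Inverted index: keyword -> set of pages containing it."""
    index = {}
    for url, text in pages.items():
        for word in text.split():
            index.setdefault(word, set()).add(url)
    return index

def rank(index, pages, query):
    """Return matching pages, highest keyword density first."""
    def density(url):
        words = pages[url].split()
        return words.count(query) / len(words)
    return sorted(index.get(query, set()), key=density, reverse=True)

index = build_index(pages)
print(rank(index, pages, "saturn"))  # → ['example.com/saturn']
```

A real engine layers many more signals on top (links, meta tags, freshness), which is why swapping the weighting changes the ordering, and why the same query returns different results on different engines.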
Google: The Mother of All Search Engines
In the modern world, Google has become the undisputed leader in the search engine world, with "Google it" becoming the go-to phrase for searching the web. Google alone processes approximately 63,000 searches a second; that's around 5.6 billion searches a day. And that's before mentioning Google's other products such as Google Docs, Maps and Shopping. Google has truly saturated the search engine market, but how have they done it?
- The Master of Acquisitions
Google's unrivalled success rests on its mastery of acquiring the right companies at the right time. From their humble beginnings until now, more than 230 companies sit under the Google brand. Each acquisition has allowed them to spread their never-ending branches further afield, from the travel industry to entertainment; they have left no stone unturned. Google's sheer arrogance and lack of remorse have allowed them to reach superhuman heights on the internet.
- Monopolising the Travel Industry
In 2010, Google bought the innovative ITA Software, whose tools powered up-to-date flight searches across the internet. In doing so, they acquired the software many of their rivals relied on and used that monopoly position to develop Google Flights.
- All Routes Lead to Google
Fast forward to 2013: Google bought Waze, a burgeoning rival in the maps industry. Not only did Google quash the competition, it also gained access to a new trove of users' location data, which allowed it to target ads more accurately.
- The Ruler of Mobile Devices
Google purchased Android way back in 2005, when it was just a small start-up. Now, Android runs on roughly seven out of ten mobile devices on Earth. Under its licensing agreement, every Android device comes preloaded with Google Maps and with Google as the default search engine. This lets Google collect users' location data from the moment of purchase and establishes it as the default search engine on the majority of the world's devices.
That same year, Google signed a deal with Apple to be the default search engine in Safari. At the time, Apple only made Mac devices. Two years later, when they released the iPhone, the deal still stood, and look where we are today.
Where do we go next?
It's fair to say Google have some pretty advanced AI technology in the works, no doubt on its way to making our lives effortlessly easier. But at what cost is this lightning-quick information reaching us? When we interact with any Google product, we relinquish some of our privacy. All Google search activity can be traced back to your Google account, meaning they can track you from multiple angles, target you with ads and generally know what you're up to at all times; sounds great, right?
There are options (good options!) for a default search engine that isn't Google and that will protect your privacy. Founded in 2009, Ecosia is a completely free search engine that plants trees with the revenue it makes from your searches. It has already planted over 69 million trees, supported coffee farmers in Colombia, and protected orangutan habitats that are being destroyed for palm oil plantations.
The future may look like a Google verse, but we have the power to change it.