A TEXTUAL CONTENT PROCESSING SOFTWARE TOOL ENABLING WRITERS TO ACHIEVE BETTER SEARCH ENGINE OPTIMIZATION RESULTS AT SCALE
Using AWS and cloud machine learning services to craft a software application that enables an SEO solutions provider to deliver search engine optimization results at lower cost and scale its SEO service business.
We developed the solution for US-based Software Innovation Labs. The client serviced on this particular project was a firm that helps brands gain visibility in search results and increase targeted traffic and conversion rates through organic search. The company innovates in enterprise SEO solutions based on deep learning algorithms and also offers its expertise in crafting effective SEO strategies.
The client wanted to design and implement an SEO editing application that would let content creators automate the SEO optimization work they perform while producing texts. The writers were to gain access to a software tool supporting them in generating and modifying texts for search engine optimization purposes. With the solution in place, the client would be positioned to deliver their services faster, at lower cost and at greater scale, i.e., to a significantly larger number of clients.
To help the client meet this challenge, we were engaged to design and implement a software application that automates the process of text search engine optimization. More specifically, the tool retrieves website textual content, analyses it, and supports content creators in making SEO-geared modifications. Writers can thus polish their texts for Google search results in a semi-automatic way.
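The first step described above, retrieving website textual content for analysis, can be illustrated with a toy extractor in Go. The case study does not show the real implementation, so the function below is only a sketch: it strips HTML tags from a fetched page body to leave the visible text an SEO pipeline would work on.

```go
package main

import (
	"fmt"
	"strings"
)

// stripTags removes HTML tags from a page body, leaving the visible
// text an SEO pipeline would analyse. This is a simplified,
// illustrative extractor, not the production tool's parser.
func stripTags(html string) string {
	var b strings.Builder
	inTag := false
	for _, r := range html {
		switch {
		case r == '<':
			inTag = true
		case r == '>':
			inTag = false
			b.WriteRune(' ') // keep words from fusing across tags
		case !inTag:
			b.WriteRune(r)
		}
	}
	// Collapse runs of whitespace left behind by removed tags.
	return strings.Join(strings.Fields(b.String()), " ")
}

func main() {
	page := `<html><body><h1>SEO Guide</h1><p>Optimize your content.</p></body></html>`
	fmt.Println(stripTags(page)) // prints: SEO Guide Optimize your content.
}
```

In practice the page body would come from an HTTP fetch and a proper HTML parser; the state machine here is just enough to show where the pipeline's input text comes from.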
In terms of technology, the application was developed in GoLang and Java and built on a suite of external services. The tool collects textual data from the analysed websites and processes the texts using microservices written in GoLang and Java. These microservices in turn leverage external services such as AWS (including AWS Lambda and EC2) and other cloud machine learning services (IBM Watson) to optimize the processed text for the targeted SEO results.
The REST data processing pipelines implemented in the solution collect and analyse textual website data, making it much more convenient for content creators to detect which keywords and keyword phrases should be used to achieve better SEO scoring for the modified website content. The data pipelines use machine learning algorithms for text decomposition and analysis.

What is more, the tool is backed by DynamoDB, a database that scales automatically with the traffic loads processed. This allows for flexible cost optimization, i.e., the costs depend on and reflect the loads the client actually processes. The app has also been equipped with a pluggable authentication provider, which enables the client to easily choose and/or replace their user database provider.

All in all, website textual content is fetched by the tool and fed into the application's data pipeline, where it is processed and optimized with the support of external services; the modifications are automatically produced for the writer working on the text, who can then accept or reject the suggested changes.
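The keyword-detection idea behind the pipeline can be sketched with plain term frequency in Go. The production pipeline uses machine learning for text decomposition and analysis, so the simple counting below is an illustration of the concept, not the client's algorithm; the function names and the length-based stop-word filter are assumptions.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// keywordCounts tallies how often each word appears in a text - the
// simplest signal a keyword-detection stage could start from.
func keywordCounts(text string) map[string]int {
	counts := make(map[string]int)
	for _, w := range strings.Fields(strings.ToLower(text)) {
		w = strings.Trim(w, ".,;:!?\"'()")
		if len(w) > 3 { // crude stand-in for a stop-word list
			counts[w]++
		}
	}
	return counts
}

// topKeywords returns the n most frequent candidate keywords,
// breaking frequency ties alphabetically for determinism.
func topKeywords(text string, n int) []string {
	counts := keywordCounts(text)
	words := make([]string, 0, len(counts))
	for w := range counts {
		words = append(words, w)
	}
	sort.Slice(words, func(i, j int) bool {
		if counts[words[i]] != counts[words[j]] {
			return counts[words[i]] > counts[words[j]]
		}
		return words[i] < words[j]
	})
	if n > len(words) {
		n = len(words)
	}
	return words[:n]
}

func main() {
	text := "Search engine optimization improves search visibility. Better search rankings drive organic traffic."
	fmt.Println(topKeywords(text, 3))
}
```

A real stage would weight terms against competitor pages and search data rather than raw counts, but the shape of the output, which is a ranked list of candidate keywords surfaced to the writer, is the same.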
- text processing application programmed in Java and GoLang
- software architecture based on external services: AWS and IBM Watson
- AWS auto scaling for handling load spikes
- pluggable authentication provider
- automatic high-speed textual data processing for SEO results
- high availability AWS-based architecture ensuring quality SLA
- infrastructure for SEO service delivery at scale
- reduced AWS-related costs
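The pluggable authentication provider highlighted above is described only as a design feature. One common way to achieve that pluggability in Go is to hide the user database behind a small interface that concrete providers implement; the interface, type, and method names below are illustrative, not the application's real API.

```go
package main

import (
	"errors"
	"fmt"
)

// AuthProvider abstracts the user database behind a small interface,
// so the provider can be swapped (e.g. a cloud identity service for
// an in-house user store) without touching the rest of the app.
type AuthProvider interface {
	Authenticate(username, password string) (bool, error)
}

// inMemoryProvider is a stand-in provider backed by a map. A real
// provider would verify hashed credentials against an external store.
type inMemoryProvider struct {
	users map[string]string
}

func (p *inMemoryProvider) Authenticate(username, password string) (bool, error) {
	stored, ok := p.users[username]
	if !ok {
		return false, errors.New("unknown user")
	}
	return stored == password, nil
}

// app depends only on the interface; replacing the user database
// provider means constructing app with a different implementation.
type app struct {
	auth AuthProvider
}

func main() {
	a := app{auth: &inMemoryProvider{users: map[string]string{"writer": "s3cret"}}}
	ok, err := a.auth.Authenticate("writer", "s3cret")
	fmt.Println(ok, err) // prints: true <nil>
}
```

Because the rest of the application only ever sees the `AuthProvider` interface, choosing or replacing the user database provider reduces to wiring in a different implementation at startup, which is the flexibility the highlight above refers to.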