Overcoming the Challenges of Front-end Technology Implementation

Front-end technologies are the magical set of tools or platforms used to develop the user interface (UI) of web applications and web pages. What users view, access, and interact with directly, all counts as front-end technology.

React, Angular, and Vue have been the most community-endorsed front-end tools for several years. In particular, React is the most popular front-end framework: it has had the highest usage ranking since 2016, hitting 80% in 2020, with Angular (56%) and Vue (49%) following behind. React’s community endorsement has been constant over the years, remaining in the top three positions for satisfaction, interest, usage, and awareness rankings since 2016.

Although these three powerhouses (React, Angular, and Vue) have led the pack so far, modern front-end technologies evolve rapidly, with new solutions being introduced to the market daily. Strong contenders, such as Svelte, are constantly emerging and the diversity of choices can cause confusion.

With this in mind, what factors should be considered before implementing a modern front-end technology? Furthermore, what challenges await post-implementation, and how did we, at Expert Network, overcome them? What advice do we have for developers who want to work with modern front-end technologies?

Key factors to consider

Base the decision to integrate modern front-end technologies, such as React, into your codebase on a technical and business analysis. Weigh the costs and risks against the benefits, and be sure that the benefits outweigh the costs.

Here are some key factors to consider as you weigh your options:

  • the amount of time that needs to be invested
  • the lifespan of the project (and its maintenance)
  • the scope of the project (including who the end-user is and whether it’s a short- or long-term project)
  • team capability

Depending on the end-user, some projects will not benefit from a snappier UI or fewer page reloads. On the other hand, modern front-end technologies will pay off in the long run over a lengthy development or maintenance timeframe. Although integrating front-end technologies requires an investment of time, newer technologies are easier to maintain and integrate, making them ideal for long-term projects.

Challenges in implementing front-end technology

At Expert Network, we use React as part of our front-end stack. We chose React because it has the best balance between developer satisfaction and community support, as well as the best third-party libraries. Despite its many benefits, we’ve had to overcome several challenges whilst implementing React: making sure our teams have the support they need, using the right tools and best practices, and providing standardized ways of integrating React with existing large codebases.

1. Getting teams ready – providing company-wide support
One of the challenges of integrating a modern technology, such as React, into an existing project is ensuring that teams are comfortable using it. The solution is to have internal resources that promote and share best practices.

At Expert Network, we hold extensive training sessions and workshops to ensure that our teams get the support they need to deliver quality code using React. We also have people who encourage other team members to use best practices and who conduct knowledge-sharing sessions. Moreover, our front-end discipline provides help and consultancy whenever required.

2. Choosing the right tools from millions
Another challenge is choosing the right tool for the job. The front-end ecosystem is incredibly diverse, with over 1.3 million packages on npm, and it is becoming increasingly difficult to sift out the ones with good support and planned future updates.

To counter this, we keep a curated list of tried-and-tested packages based on the community support a tool enjoys. This list helps our developers choose the right tool to tackle a problem. At the same time, it helps us standardize our workflow across multiple projects.
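
To give a concrete feel for the kind of screening behind such a curated list, here is a minimal sketch in JavaScript. The package data and thresholds are hypothetical illustrations, not our actual criteria; real screening would also look at open issues, maintainer activity, and documentation quality.

```javascript
// Sketch: filter candidate packages by community health signals.
// All numbers below are invented for illustration.
function screenPackages(packages, { minWeeklyDownloads, maxMonthsSinceRelease }) {
  return packages.filter(
    (p) =>
      p.weeklyDownloads >= minWeeklyDownloads &&
      p.monthsSinceLastRelease <= maxMonthsSinceRelease
  );
}

const candidates = [
  { name: 'react-query', weeklyDownloads: 2500000, monthsSinceLastRelease: 1 },
  { name: 'stale-widget', weeklyDownloads: 120, monthsSinceLastRelease: 48 },
];

const approved = screenPackages(candidates, {
  minWeeklyDownloads: 10000,
  maxMonthsSinceRelease: 12,
});
console.log(approved.map((p) => p.name)); // [ 'react-query' ]
```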

3. Integrating with large existing codebases
Not every project is new. When using modern front-end technologies, they may need to be integrated with an existing codebase. Depending on how the specific project was designed, this could be an easy or difficult task.

To help our teams, we provide guidance and support with planning and integration. We also constantly share our experiences with different integrations on various projects so that individuals can use this knowledge to choose the best solution. For example, we hold internal meetings called DevOps Forums, where our developers exchange ideas, learn from each other, and come up with the finest strategies and approaches to implement in our projects. It’s a hands-on approach that tackles the task in question directly and gives the team a space to get the advice and support they need.

At Expert Network, we constantly aim to standardize and streamline processes because it helps our developers, our customers, and our company. In this sense, we are working on a series of guidelines, training, and an internal audit process to find timely solutions to issues that may arise.

Advice to keep in mind

Our experience in integrating modern front-end technology has been a true learning curve. For other developers on the cusp of a similar journey, here is some advice we’d like to share based on our experience:

  • Improve your project one module at a time. Integrating with large codebases does not have to be done all at once.
  • Use tools that are supported by the community. This will help you to reduce the number of bugs and will make it easier to find solutions for issues that arise along the way.
  • Your team is the key to the success of a project. It’s essential to identify team members who have an affinity for front-end technologies and are willing to promote or share their knowledge with the rest.
  • Several frameworks have been tried and tested, proving themselves effective for projects of any size and difficulty. Using community-endorsed tools like React, Angular, or Vue will therefore provide the best results, as you’ll easily find the answers you are looking for.

Share this article on

Join us

Reach your full potential and value. Visit our Careers page.

A Guide to DevOps Security

DevOps successfully unifies two traditionally separate aspects of the IT world: software development (Dev) and IT operations (Ops). By combining the people, processes, and technology of these entities, DevOps accomplishes its goal of shortening the systems development life cycle. The DevOps approach utilizes the Agile methodology to integrate and streamline the development and operations processes. This enables a faster and more efficient development process that provides continuous delivery of value and high-quality software.

As the spotlight focuses on speed, automation, and other DevOps tenets, security often becomes an afterthought, and neglecting it is a common shortcoming. Effective DevOps ensures rapid and frequent development cycles, but outdated traditional security practices can’t keep up. Although the fast pace that DevOps promotes is part of what makes it so desirable, this velocity is simultaneously a downside, as it can lead to additional security risks.

The solution is to integrate security protocols and practices across the entire DevOps pipeline and life cycle. DevOps security, or DevSecOps, is the application of information security (InfoSec) policy and technology to the entire DevOps lifecycle and value stream.

What security challenges have emerged from our DevOps initiatives? What practices have we implemented to overcome security gaps or challenges without hindering the benefits of DevOps practices?

Challenges in DevOps Security

The shift from monolithic applications to agile DevOps environments presents new risks and changes that traditional security solutions and practices cannot address. What new security challenges have emerged as a result of the modern, agile focus of DevOps?

  • The fast-moving development process and environment can lead to security concerns caused by undetected bugs or errors. If security cannot move at the same speed as DevOps, it can lead to unintentional vulnerabilities, insecure code, and other weaknesses that contribute to operational dysfunction.
  • The DevOps model is based on collaboration between different teams but these teams may have different processes, which can lead to gaps in security protocols. The highly interconnected and cohesive nature of collaboration in DevOps teams may also require unrestricted access to privileged accounts or the sharing of access keys, API tokens, certificates, etc. In turn, this opens dangerous backdoors and provides opportunities for malicious actors to attack, steal data, and disrupt operations.
  • A large majority of DevOps environments rely on the extended usage of cloud (serverless) computing. To ensure complete support, the cloud provider needs to fulfill security requirements in compliance with an organization’s security processes and policies.
  • Integrating security into CI/CD pipelines is a challenge in itself: automated security testing must be included to address vulnerabilities early and eliminate inefficiencies, which is difficult to retrofit into established pipelines.

Best Practices for DevOps Security

The DevOps ethos has brought on a transformation and changed the way security needs to be achieved. Since DevOps involves every stage of the software development life cycle, effective security is more critical than ever. What principles and guidelines have we integrated to help deal with security challenges in a DevOps environment?

  1. Transition to a DevSecOps team. DevSecOps is an approach that brings together software development (Dev), security (Sec), and IT operations (Ops) to integrate security into the entire DevOps pipeline and life cycle. It’s essential for the DevOps team to take ownership of addressing security (rather than relying on an external provider) and for security to be included as early as possible in the development life cycle.
  2. The development team must adopt secure coding practices. For example, the OWASP Secure Coding Checklist is a comprehensive reference.
  3. Use static code analysis tools to highlight security flaws.
  4. Designate a member of the team to be responsible for security. Also make sure to use documented security procedures and policies that are easy for developers and other team members to comprehend.
  5. Use separate environments for development, testing, and production whenever possible, and restrict access to the production environment.
  6. Make security a main concern in all testing phases and strategies.
  7. Follow a structured security testing methodology.
  8. Automate security testing as much as possible to scale security to DevOps processes. Security automation also minimizes the risk of human error and manual intervention.
  9. Follow credentials-management best practices to securely store and manage credentials.
  10. Run periodic manual and automated penetration testing on the production environments.
  11. Use containers whenever possible. Containers enable DevSecOps, as they provide security and enable rapid, repeatable application development and deployment cycles.
  12. Hack yourself. Assess your infrastructure and code from an attacker’s viewpoint to enable a better understanding of the security weaknesses and strengths of an application, service, data center, and cloud platform.
  13. Use automatic backups for critical information resources.
  14. Prepare a disaster recovery plan and test it periodically.
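
As an illustration of what an automated check in a pipeline (points 3 and 8 above) can look like, here is a minimal sketch of a hardcoded-credential scan in JavaScript. The patterns are simplified examples; a real setup would use a dedicated static-analysis or secret-scanning tool rather than hand-rolled regexes.

```javascript
// Sketch: flag source lines that look like hardcoded credentials.
// The patterns below are illustrative, not exhaustive.
function findHardcodedSecrets(source) {
  const patterns = [
    /password\s*[:=]\s*['"][^'"]+['"]/i,
    /api[_-]?key\s*[:=]\s*['"][^'"]+['"]/i,
    /secret\s*[:=]\s*['"][^'"]+['"]/i,
  ];
  return source
    .split('\n')
    .flatMap((line, index) =>
      patterns.some((p) => p.test(line))
        ? [{ line: index + 1, text: line.trim() }] // report line number and content
        : []
    );
}

const sample = 'const apiKey = "abc123";\nconst port = 8080;';
console.log(findHardcodedSecrets(sample)); // flags line 1 only
```

A check like this can run on every commit, failing the build before an exposed key ever reaches a shared branch.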

Despite the many benefits of DevOps, it presents new risks and security challenges that traditional solutions are unable to address. Although DevOps fuses development and operations processes, DevOps and security often still remain largely separate. To overcome this, DevOps security, or DevSecOps, aims to integrate security into all phases of the software development life cycle: from planning, developing, and testing to release, deployment, maintenance, and beyond.

Introducing DevOps security early in the life cycle enables a productive DevOps ecosystem. It helps to identify vulnerabilities and operational weaknesses long before they become an issue. By implementing the above DevOps security best practices, organizations will be empowered with the ability to reduce data breaches and to continuously deliver high-quality software at velocity, securely.

Advancing in DevOps with Cypress and Percy

In general, emerging development technologies in the software industry require more attention from those who contribute to the creation of products. How can we deliver quality products that meet customer needs? By staying on our toes, constantly seeking new solutions that can be applied to projects.

As the web evolves, testing needs to evolve along with it. In the QA discipline, there’s been a persistent growth in the need to automate manual testing. Integrating a tool that could accelerate such automation in the DevOps process is necessary to fuel fast, high-quality, and reliable delivery. At Expert Network, we recognized that the best-fit tool for us would need to be developer-friendly, powerful, and as open-source as possible. Its scalability across browsers and resolutions was also a criterion we needed to consider.

The solution we chose was Cypress, a new automation tool that met all these criteria and has since been implemented in several projects.

The benefits of Cypress

The need to deliver correctly, on time, and as easily as possible led to the question: “Which automation tool is best for us?” To find our answer, we compared the particularities of Selenium and Cypress. Selenium is a famous and widely used automation tool, whilst Cypress is a newer, JavaScript-based testing framework built for the modern web. After comparing the tools and balancing their differences, we found that the benefits of Cypress outweighed those of Selenium.

What are the benefits of Cypress?

  • The speed of test writing was a decisive criterion in this “dispute”. Test development in Cypress is much faster thanks to its programming language (namely JavaScript), and also much easier, since only minimal JavaScript knowledge is required.
  • Easier and quicker debugging in case of failure. The location and cause of errors are clearly presented, and Cypress even offers a solution at times. This reduces the time spent looking for errors and solutions, which in turn makes maintenance faster.
  • The real-time reload. After each code change, Cypress reloads in real time, running the freshly modified automation test. This helps to streamline the test development process so you can quickly view the results of new changes.
  • Cypress supports many browsers, which makes testing on different types of environments more accurate.
  • Consistent and well-structured testing results make it easy to create reports. Furthermore, Cypress’s screenshot and video features help to accurately diagnose and document a problem encountered in the application.
  • Cypress has great integration with Percy, a visual test tool.

Alongside Cypress, we adopted Percy, a visual testing tool that provides a complete visual overview: rendering, comparing, and reviewing visual changes to catch bugs. With Percy, we could get visual coverage across our entire UI. We chose Percy because it has a scalable infrastructure optimized to be fast. It’s secure, reliable, and allows our team to work collaboratively. It adapts easily to browsers or changes in resolution and finds even the smallest changes to components in the application.

Bumps along the road to Cypress

The main challenge we encountered in our switch from Selenium to Cypress was the transition to a new programming language. Although Cypress’s JavaScript language is easy to learn, there are considerable differences compared to Selenium’s C# language. This difference in language had the potential to create conflicts of knowledge when developing the automation suite.

Another challenge was the complex task of applying object-oriented programming (OOP) to create a suite that would be organized and easy to understand. We needed to create a structure that was very well organized, which is more difficult than it sounds. It also needed to facilitate the maintenance and reuse of certain tests, functions, and commands.

The last bump in our road to Cypress was creating the right structure for test results, as well as moving or rewriting existing tests in Cypress. The test results, in the form of screenshots and videos, had to be easy to find so they could be accessed by everyone. Our solution was to adopt a Page Object type of structure for orderly and well-systemized development.
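
As a rough illustration of the Page Object idea, here is a framework-free sketch. The page name and selectors are hypothetical, not from our actual suite; in a real Cypress suite, the methods would wrap Cypress commands such as cy.visit and cy.get.

```javascript
// Sketch: a Page Object centralizes one page's URL and selectors so
// specs read as page actions instead of raw selector strings.
class LoginPage {
  constructor() {
    this.url = '/login';
    this.selectors = {
      username: '[data-cy=username]',
      password: '[data-cy=password]',
      submit: '[data-cy=submit]',
    };
  }

  selector(name) {
    // In a real suite this would return cy.get(this.selectors[name])
    return this.selectors[name];
  }
}

// A spec then reads as a sequence of page actions, e.g. (Cypress pseudocode):
//   const login = new LoginPage();
//   cy.visit(login.url);
//   cy.get(login.selector('username')).type('qa-user');
const login = new LoginPage();
console.log(login.selector('submit')); // -> [data-cy=submit]
```

If a selector changes, it is updated in one place rather than in every test that touches the page.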

What kind of results did Cypress provide?

Implementing Cypress improved our metrics. It takes less time to write and run tests thanks to the Cypress interface, the complexity of regression testing decreased considerably, and releases are no longer as painful. The automation process also starts from the early stages of user stories, helping us to successfully catch bugs from the first stages of development.

Other advantageous results include how easy it is to identify differences in application behavior between browsers, and how much of the error searching in debugging is done by Cypress itself. The consistency of Cypress’s testing results, as well as the screenshot and video evidence preserved for each bug, also makes automation run reports extensive and accurate.

Many of our colleagues were very open to the transition from Selenium to Cypress. They were quick to notice the advantages that came with this tool and keen to learn more about it.

Additionally, Cypress (along with Percy) provides insight into every product change, which was exactly what we needed. Now we see constant progress at project level and there’s nothing we enjoy more.

Never stop improving, this is how we DevOps

It’s undeniable that the software industry goes through continuous upgrades. A more efficient new tool or design pattern could appear at any time. As such, we put a lot of focus on remaining up-to-date and willing to adapt to new requirements or opportunities in the market.

At the moment, our plans are to improve the structure of the automation project. This will optimize the development and running of test suites in terms of both space and time. Additionally, we are continuously working to eliminate hardcoded values and workarounds from the project. This particular improvement is a priority for the creation of a reliable, usable, and efficient project.

Another target we’re aiming to reach (as soon as possible) is the full integration of Percy in the project to automate the validation of its design. Finally, we plan to help our developers adopt Cypress-specific coding best practices. Accomplishing this would aid both the project and the QA Engineer in charge of creating the tests.

We’re constantly seeking new solutions and planning for future implementations or possible improvements. As the need to automate manual testing continues to grow in the QA discipline, we’ve chosen Cypress and Percy as our solutions. They’re the best-fit automation tool for us and we already benefit from increased speed of test writing and running, easier debugging, consistent and well-structured testing results, and more. We have plans in store for future developments and aim to continuously adopt innovative technologies that benefit us and our clients. This way, we can remain competitive in this rapidly evolving industry.

Introducing New Technology to a Long-Running Project

As time elapses, technology evolves continuously, and finding ways for products to remain competitive is a necessity. This is notably evident in long-running projects that grow gradually to stay relevant and competitive by adapting to trends and implementing new technologies.

At EXN, we have worked on a long-running project with Carflow, Europe’s complete solution for marketing, sales, and stock management for auto dealers. Carflow’s goal is to create a platform that allows car brands, dealers, and buyers to connect, share information about products, and manage their purchases. Our journey with them began in 2008, and the initiative to redesign their platform has been an ongoing mission since 2013.

As new challenges emerge, more advanced technologies are needed to adapt business flows, improve speed, and enhance UI/UX. For the Carflow project in particular, the benefits of KnockoutJS were slowly fading away as the technology aged and received less support. Meanwhile, newer and better front-end technologies were gaining traction.

Introducing ReactJS to Carflow

At that point, a substantial module in the Carflow application needed a revamp to improve business flows and UI/UX. The old module was built with ASP.NET MVC and ASP.NET Web API for the back-end and KnockoutJS to render the front-end. KnockoutJS is an open-source JavaScript library that helps developers build rich and responsive websites, but its declining benefits were an indicator that it was time for an upgrade.

In other words, it was the perfect opportunity to introduce a new technology: ReactJS. We set about rebuilding the module using ReactJS as the front-end technology while maintaining the existing back-end architecture (with a few adaptations).

Why did we choose ReactJS? Similar to KnockoutJS, ReactJS is an open-source front-end JavaScript library that helps developers create interactive UIs. ReactJS is currently one of the hottest topics in the software industry. It has a rich ecosystem of libraries and packages, and has built an outstanding track record in the industry and community. It also raises the standard of web page rendering performance, which was ideal for us, since one of our objectives was to improve user experience.

What are the advantages of ReactJS over KnockoutJS?

  • ReactJS’s Virtual DOM (Document Object Model) mechanism makes the process of updating real DOMs much faster and more efficient.
  • Although KnockoutJS also has components, it is easier to develop, extend, and compose components in ReactJS.
  • ReactJS has a thriving community due to its widespread adoption.
  • ReactJS comes with “one-way data binding” while KnockoutJS’s “two-way data binding” offers inferior performance and allows the introduction of unhealthy patterns. That said, two-way data binding was initially more intuitive to learn and use.
  • With ReactJS, we had multiple viable options of approaching state management, allowing us to choose the best fit for our data flows.
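
To make the one-way data binding contrast concrete, here is a framework-free sketch of the data-flow model. The state shape and actions are invented for illustration, and React’s actual API differs; the point is only that the UI is a pure function of state, with changes flowing one way: action, then new state, then re-render.

```javascript
// Sketch: one-way data flow. The view never mutates state directly;
// it is re-derived from state after each update.
function render(state) {
  return `Vehicles in stock: ${state.count}`;
}

function update(state, action) {
  switch (action.type) {
    case 'add':
      return { ...state, count: state.count + 1 };
    case 'remove':
      return { ...state, count: Math.max(0, state.count - 1) };
    default:
      return state;
  }
}

let state = { count: 0 };
state = update(state, { type: 'add' });
state = update(state, { type: 'add' });
console.log(render(state)); // Vehicles in stock: 2
```

With two-way binding, the view can write back into state implicitly, which is what makes unhealthy patterns easier to introduce.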

The challenges faced and how we overcame them

Transitioning to new technologies can be a bumpy road. These are the challenges we had to face with the introduction of ReactJS:

  • Our team had no knowledge of or expertise in ReactJS, which meant we were at the starting line of a learning curve.
  • Variety requires meticulous decisions. ReactJS has a rich ecosystem of libraries and packages, so it took time to choose the appropriate ReactJS patterns and libraries for our needs.
  • Creating the new module in ReactJS meant that we ended up with two applications: the existing ASP.NET MVC App and the new React App. The challenge was then to achieve seamless transitions so that users would not notice the switch from one application to the other.

How did we approach and overcome these challenges? To solve the team’s lack of knowledge in ReactJS, we held internal training sessions during the transition. We also organized sessions with a trainer and specialist from our local community who guided us through the features and quirks of ReactJS.

To overcome the slightly overwhelming variety of libraries and packages that ReactJS offered, we collaborated with a ReactJS consultant to help us choose the most suitable stack of libraries for our needs. Of the many that were available, we ended up with:

  • React Hooks
  • Styled-components
  • MobX for state management
  • React Hook Form for form validation
  • Axios for API request management
  • Material-UI for React component styling

To approach the challenge of ending up with two applications (ASP.NET MVC and React), we employed several solutions. The first was to host the React App as a module inside the ASP.NET MVC App. We achieved this by having a script that creates a virtual directory in IIS as part of the deployment script and binding the virtual path to the React Router. Secondly, we made the two applications share the same session by using the Axios withCredentials parameter for API requests and leveraging the virtual directory approach. The third was to replicate authorization mechanisms in the React App to match the ones in the ASP.NET MVC App. Since our application is deployed as an Azure Cloud Service, a package is created during the release pipeline. We included a script in that deployment package to create the virtual directory and copy the npm build output of the React App to that location.
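
As a sketch of the session-sharing setup, the React App’s API client configuration can look like the object below. The path and header values here are illustrative assumptions, not the project’s actual configuration; in real code this object would be passed to axios.create().

```javascript
// Sketch: API client config for the React module hosted under an
// IIS virtual directory inside the ASP.NET MVC App.
const apiClientConfig = {
  // Hypothetical virtual-directory path bound in IIS and to React Router
  baseURL: '/react-module/api',
  // Include cookies on requests so the ASP.NET session is shared
  withCredentials: true,
  headers: { 'Content-Type': 'application/json' },
};

console.log(apiClientConfig.withCredentials); // true
```

Because both apps are served from the same origin via the virtual directory, the browser attaches the existing session cookie to the React App’s API requests, and the user never has to log in twice.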

The drawbacks and wins

An unexpected and slightly humorous drawback of our transition to this new technology is that there’s more competition between team members as they “fight” over the stories in the new React module. Team members love taking them on, and this has resulted in greater involvement in task assignment and Sprint management, since everyone wants to progress in ReactJS. It is also an extra challenge now for Product Owners and Development Leads to keep everything in check and balanced knowledge-wise, as well as to keep everyone motivated. Compared to developing it in KnockoutJS, the redevelopment of the module with ReactJS took longer due to the learning curve we had to overcome and the refactor we needed to do down the line.

The refactor was necessary because we wanted to reduce the technical debt that usually appears when adopting a new technology if incorrect patterns are chosen. The last drawback lies in the fact that the project now uses two front-end technologies. This means that developers need to make a paradigm switch from time to time. Furthermore, new colleagues who join the project need to learn both technologies, which takes additional time and guidance to ensure that everyone working on the project possesses the same amount of knowledge.

A particular benefit of introducing ReactJS was that people on the team found it very motivating and stimulating to work with the latest front-end technology. As we got past the initial learning curve, we also observed increased development speed. Furthermore, feedback from our client also emphasized that the new module was much faster and more responsive than the previous ASP.NET MVC and KnockoutJS combination. The development of this module with new technology also triggered a change in the framework used for UI automated testing: we switched from Selenium to Cypress. At the end of the day, it was a big win for the team to complete this migration successfully while overcoming each hurdle that was encountered along the way.

The transition to new and more advanced technologies can be difficult, but the wins we’ve achieved with the introduction of ReactJS to the Carflow project outweigh the drawbacks and obstacles we’ve had to overcome. As of now, we already benefit from increased development speed, improved UI/UX, greater efficiency, positive feedback from our client, and a motivated team. We aim to continuously adopt innovative technologies that benefit us and our clients so we can remain competitive in this rapidly evolving industry.

What You Didn’t Know about Agile and DevOps

As word spreads about the benefits they provide, many organizations are turning to Agile and DevOps methodologies to really drive organizational outcomes. It’s broadly understood that the goal of Agile and DevOps is to improve and boost the productivity of a business. This is achieved by unifying the people, processes, and technologies of development and operations teams in order to develop and deliver high-quality software efficiently, reliably, and cost-effectively.

Beyond the main benefits that have been widely iterated, what else is there to know about Agile and DevOps? In this article, we’ll cover recent statistics about Agile and DevOps, what is required to implement them, and difficulties teams might encounter along the way. We’ll also reveal what inspired our transition and our advice, which is based on our experience, to others who are on the cusp of taking on the challenge.

Take a look at the data

For those still a little unfamiliar with Agile and DevOps, here is a concise analysis of both concepts, their benefits, and what sets them apart. The development of Agile and DevOps has revolutionized the software development industry over the past 30 years. It has replaced traditional and time-consuming methods with a new culture that promotes progressive and efficient approaches, automation, innovation, and improved communication and processes. Data from research into their implementation exemplifies the value they can bring to those who take on these practices.

The 2020 State of Agile Report reveals that more and more organizations are realizing the value of Agile adoption, with 95% of organizations having adopted some form of Agile process. The 2018 Standish Group Chaos Study results also show that Agile projects are statistically twice as likely to succeed and a third less likely to fail than waterfall projects. In his welcome keynote address at the Global SAFe Summit 2020, Chris James (CEO of Scaled Agile, Inc.) shared that 93% of business units that had fully adopted an agile model before the COVID-19 crisis outperformed units that hadn’t.

DevOps adoption and success rates are also soaring. According to the recent 2021 State of Database DevOps report, 74% of the 3,200 enterprises surveyed have adopted DevOps in some form, a significant increase compared to 47% in the 2016 report. With regard to success rates, 99% of respondents to the Atlassian DevOps Trends Survey 2020 said that DevOps has had a positive impact on their organization. Furthermore, those who have been practicing DevOps for longer (3+ years) are more likely to see higher-quality deliverables (66%), a lower failure rate of new releases (45%), and fewer incidents (40%). The DevOps Research and Assessment (DORA) 2017 and 2019 State of DevOps Reports also reveal that high-performing DevOps teams are more agile, deploying 208 times more frequently and 106 times faster than low-performing teams. The best teams recover 2,604 times faster and spend 50% less time fixing security issues.

In a scene where speed, stability, and security are sought-after qualities, Agile and DevOps are the ideal choice to drive improved and elite performance.

What do you need to implement Agile?

Although Agile and DevOps share the common aim of developing and delivering end-products as quickly and efficiently as possible, they employ different approaches and functions.

The Agile methodology is a people- and results-focused approach to software development and testing that is primarily used to create applications. It is centered around adaptive planning, self-organization, and short delivery times. It is flexible, fast, and aims for continuous improvements in quality using techniques like Scrum, Kanban, and Extreme Programming (XP). The Agile methodology also relies on certain core tools: Agile boards, backlog management, project and issue tracking, Agile reports, and custom workflows. Popular Agile tools include ActiveCollab, Agilo for Scrum, Atlassian Jira + Agile, Pivotal Tracker, Prefix, and Retrace.

Although the list for Agile best practices could extend endlessly, here are a few important ones:

  • Communicate, whether face-to-face or through open communication tools, at every stage of the project to ensure the process stays on track even as conditions change.
  • Work fast, set priorities, and maintain small release cycles. Aim to deploy increments every few weeks and the entire piece of software in a couple of months.
  • Integrate fast feedback loops, tracking feedback on the success and speed of the development process regularly.
  • Empower self-organizing teams: they are highly motivated and generate the most value because they understand the goals and create their own path to reach them.
  • Continually inspect practices, then adjust and adapt the process quickly.
  • Use techniques like pair programming to deliver higher quality.

What do you need to implement DevOps?

DevOps is an evolution of Agile and is fundamentally utilized for application deployment. It brings together the skills, tools and processes from development and operations teams to create a culture all about automation, communication, accountability, increased collaboration and shared responsibility. DevOps standards, practices, and tools are different for every team and every company. As long as the practices and tools adopted help to deliver quality software faster, the goal is served. There is a wide array of DevOps tools available on the market for different categories: GitLab or GitHub for source code control, Ansible or Jenkins for CI/CD management, Kubernetes for container platforms and microservices, Azure or AWS for cloud computing and storage, and so much more.

Here are a few best practices to adopt when transitioning to DevOps:

  • Change the culture so that collaboration, transparency, trust, and empathy become a focus. Developers should be involved in operations, and vice versa.
  • Have a loosely-coupled architecture at both team level and technology level.
  • In addition to source code, version control everything from settings and parameters to software and hardware configuration files.
  • Use Agile methodologies for software development and infrastructure to deliver value to customers faster and with fewer headaches.
  • Automate everything that can be automated since manual tasks and processes are error-prone and unscalable.
  • Shift left with continuous integration, delivery and deployment (CI/CD) pipelines so that if anything fails, there is a fast feedback loop leading to rapid recovery.
  • Build with the right tools for each stage of the DevOps lifecycle and use the same tools and platforms in development, staging and production environments so that what works in development will transition successfully to the staging and production systems.
  • Continuously monitor the DevOps pipeline and applications so a broken build or failed test doesn’t cause unnecessary delays.
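The shift-left and fast-feedback ideas in the list above boil down to a simple control flow: run checks in order and halt the moment one fails, so nothing downstream wastes time on a broken build. Here is a minimal sketch in Python; the stage names and checks are invented for illustration and don’t come from any specific CI tool:

```python
# Minimal fail-fast pipeline runner: stages run in order and the run stops
# at the first failure, so feedback reaches the team as early as possible.
def run_pipeline(stages):
    """stages: list of (name, check) where check() returns True on success.
    Returns (succeeded, log of the stages that actually executed)."""
    log = []
    for name, check in stages:
        passed = check()
        log.append((name, passed))
        if not passed:
            return False, log  # broken build: later stages never run
    return True, log

# Hypothetical stages standing in for real build/test/deploy steps.
result, log = run_pipeline([
    ("compile", lambda: True),
    ("unit-tests", lambda: False),  # a failing test...
    ("deploy", lambda: True),       # ...means deploy never runs
])
```

Because the failing stage short-circuits the run, the feedback loop stays tight: the team learns about the break in minutes, and the deploy stage is never reached with bad code.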

Difficulties along the way to transformation

The transition to Agile and DevOps is hard and an ongoing journey. It’s a real challenge, but it also delivers real value to organizations. Common issues with Agile and DevOps implementations are limited knowledge and Agile skills, legacy infrastructure, and adjusting company culture to adequately support the Agile and DevOps philosophy.
Integrating a new perspective and culture is no easy feat, since everyone, including clients, needs to be in agreement and ready to embrace a different approach and a new way of working. 75% of major organizational change initiatives fail, and the leading cause is neglect of company culture. Agile and DevOps are all about acceleration and faster releases, and the only way for them to work is to encourage communication and collaboration and break down silo mentalities. It’s also essential to make the transition accessible, easy to follow, and accompanied by appropriate guidance, support, and tools. Fully embracing Agile and DevOps takes open minds, time, and practice, and is a continuous job.

Our transformation and advice for others?

A couple of years ago, we began transitioning to Agile and only started integrating DevOps practices later on. Although there was plenty of talk about transitioning to DevOps practices, not everyone on our team was on board with the idea, until a particular book inspired us all. Accelerate: The Science of Lean Software and DevOps by Nicole Forsgren, Jez Humble, and Gene Kim motivated us to begin our DevOps transformation. We made use of the practices and capabilities presented in the book to drive high performance in technology delivery and, ultimately, achieve strong organizational outcomes.

Besides reading Accelerate, we encourage others who wish to integrate these practices into their businesses and projects, or who are on the cusp of a DevOps transformation, to consult research from Google’s DORA team. These reports are informative and will convince others, as well as yourself, of the value that Agile and DevOps practices can provide to your organization. In addition, there are several recordings of Nicole Forsgren’s visionary and pragmatic presentations that share insights into the key leadership, technical, architectural, and product capabilities that enable high-performing IT teams to decisively outperform low-performing peers.

The transition to Agile and DevOps is undoubtedly challenging, but they deliver benefits and real value to organizations. With each day that passes, it gets easier to understand and efficiently implement these practices so you can remain innovative and stay on top of your game. As of now, we already enjoy faster releases, CI/CD pipelines, and continuous integration thanks to our Agile and DevOps transformation. We aim to fully achieve continuous delivery and maintain our high performance in technology delivery and organizational outcomes.


Agile and DevOps, the Game Changers that Drive Innovation


The latest buzzwords on the lips of CEOs, developers, and business leaders are Agile and DevOps. But why are they so popular, and what value do they actually bring to organizations?
Broadly speaking, both concepts include practices with the common aim of getting end-products out as quickly and efficiently as possible. Yet, to zero in on the targeted goals, Agile and DevOps work differently, employ various approaches, and function distinctly in a team-based environment.

Although many businesses are eager to take on these practices, it’s essential to make a calculated and well-researched decision before making the transition. In this article, we’ll provide an analysis of both concepts, the benefits, as well as what sets them apart, and our experience with implementing them.

The Agile methodology

The concept came to life in the 1990s as a response to the growing needs of the software development industry. In 2001, the Agile Manifesto was written by 17 independent-minded software practitioners who were uncovering better ways of developing software while helping others do the same. In the manifesto, they introduced 12 principles and 4 values that provide an overview of expectations in Agile development lifecycle practices. The values are:

  • Individuals and interactions over processes and tools.
  • Working software over comprehensive documentation.
  • Customer collaboration over contract negotiation.
  • Responding to change over following a plan.

Basically, Agile is a collection of methodologies based on the best practices of the time. It brings together Extreme Programming (XP), Scrum, and other lightweight software development processes, combining the most progressive and efficient approaches into a single set of principles. So, whoever applies any type of Agile methodology adheres to the values and principles that make up the Agile software development culture.

Agile benefits

To better understand this concept, Tom Hall, a DevOps advocate and practitioner, explains: “Agile is an iterative approach to project management and software development that focuses on collaboration, customer feedback, and rapid releases… In an agile approach, some planning and design are done upfront, but the development proceeds in small batches and involves close collaboration with stakeholders.”

Here are six clear benefits that we can enjoy from incorporating this approach:

  • Software can be remediated in real-time. Unlike the waterfall approach, which doesn’t move fast enough to meet customer demands, Agile encourages rapid and flexible responses to change.
  • Focus on business value by continuously aligning development with customer needs and trends.
  • Efficiently deliver high-quality software through Sprints that allow teams to quickly notice and respond to unpredictability.
  • Transparency for all parties involved through user stories. They help achieve cross-team clarity on what to build, for whom, why, and when.
  • Reduced risks leading to increased customer satisfaction.
  • Predictable costs and schedules. Clients can more readily understand the approximate cost of each feature and each Sprint has a fixed duration.

DevOps Practices

The DevOps concept is more recent than Agile and arose from the need to innovate beyond the rigid frameworks then available. For years, development and operations teams remained separate, with different (and often competing) objectives, department leadership, and key performance indicators, all of which led to dysfunction and frustration.

The term DevOps gained popularity in 2007 and 2008, when the two communities (led by people like Patrick Debois, Gene Kim, and John Willis) united and eliminated this pattern of siloed teams and broken lines of communication within organizations.

The principle of DevOps is to bring development and IT teams together, connecting the skills, tools, and processes from every facet of an engineering and IT organization. View it as “an evolution of agile practices, or as a missing piece of agile.”

At its core, DevOps is about creating a culture of improved communication and processes with better information sharing across teams and companies. It promotes a philosophy of agility, automation, transparency, communication, and efficiency.

In a survey of 500 DevOps practitioners, Atlassian found that 50% of organizations say they’ve been practicing DevOps for more than three years.

DevOps benefits

On the technical side, the advantages are:

  • Faster resolution of problems because team members don’t need to wait for a different team to troubleshoot and fix the problem.
  • Continuous software delivery as code changes are automatically built, tested, and prepared for a release to production.
  • Less complexity because DevOps streamlines processes, generating efficient releases that ensure quality builds.

Business-wise, the clear benefits are:

  • Features are delivered faster, which can lead to increased revenues.
  • The operating environment is more stable and lowers costs.
  • Drives business innovation, increasing sales as well as employee and customer engagement.

What sets Agile and DevOps apart?

As we’ve explained, Agile and DevOps are not fundamentally opposed: their goals are the same, to improve and boost the productivity of a business. Yet each follows its own approach and contributes differently to that shared purpose. The differences and similarities between the two manifest as follows:

– Agile concentrates on the flow of software from ideation to code completion.
– Agile puts focus on collaboration between developers and product management.
– Agile provides structure to planned work for developers.
– Agile emphasizes iterative development and small batches.

– DevOps expands the focus to delivery and maintenance.
– DevOps includes the operations team.
– DevOps incorporates unplanned work common to operations teams.
– DevOps concentrates more on test and delivery automation.

Expert Network’s experience with Agile and DevOps

A couple of years ago, we began transitioning to Agile and integrated DevOps practices later on. With Agile, we organized courses so that all teams could familiarize themselves with the methodology and guidelines. This was necessary in order to make the right adaptations depending on ‘the profile’ of each project. We also held meetings where we would voice our concerns, the results obtained, and learn from each other to come up with appropriate solutions.

Integrating a new perspective wasn’t a walk in the park because it meant rewiring our approach and embracing many different changes. In addition, we also needed to be in agreement with our clients and explain to them why such an upgrade was beneficial to everyone. Luckily, once we were all on board and began enjoying the impactful, positive changes, there was no going back. Some of the improvements we have enjoyed:

  • Tasks are better defined and easier to accomplish.
  • Planning is more straightforward, making our releases more frequent and delivery feedback faster.
  • With increased transparency, all parties involved have access to the roadmap and changes can be easily picked up and implemented.

In a similar manner, when we took a more hands-on approach to DevOps, we wanted to make sure the transition would be accessible, easy to follow, and accompanied by guidance and support. We were already on Azure, using cloud infrastructure for project management and provisioning the resources we needed. Still, fully embracing DevOps takes time, is a continuous job, and necessitates constant testing and adapting to come up with the best course of action for each situation.

Yet, initially, our main objective was to integrate certain practices that would ease and streamline the whole way of working. We focused on the technical implementation of DevOps, holding internal presentations and meetings where we introduced the new requirements and tools and showed how each team could adopt them. We created performance metrics to monitor how teams were performing, where they needed assistance, and how we could intervene to help. At the moment, we already enjoy faster releases, CI/CD pipelines, and continuous integration, with the aim of fully achieving continuous delivery.

We couldn’t neglect the cultural impact that comes with these practices. That is why we are constantly building our mindset around psychological safety, enabling autonomous teams, and rethinking the team leadership structure to encourage teams to work towards a common goal through their vision and values, mindful of their needs.

So far, we have shown high performance in technology delivery and, ultimately, we wish to maintain our high organizational outcomes that lead to our clients’ success stories. With each day that passes, we become better and better at understanding these practices and how to efficiently implement them to innovate and stay on top of our game.

All things considered, it is undeniable that both Agile and DevOps shook things up by breaking through traditional ways of tackling software development. Together, Agile and DevOps have a considerable impact on how individuals approach collaboration models and how effectively they implement the needed digital transformation.


The Significant Impact of CI/CD Pipelines with CC4ALL


Over the last couple of years, automation in the form of continuous integration, delivery, and deployment (CI/CD) has become increasingly popular and important. Its rise in popularity is related to its ability to create a seamless and efficient DevOps process, from writing code to deploying it to production. This encourages higher-quality application development and faster release times, and improves the security and stability of delivery capabilities. As described in a previous article, the result of a recent project with our client CC4ALL was their transition from a monolithic to a microservices-driven architecture. Although this upgrade has ensured greater efficiency, application flexibility, and improved performance for them, it has also had a particular impact on the way their CI/CD pipelines are defined.

So, how have we adopted CI/CD in our project with CC4ALL and what is the effect of CI/CD pipelines?

Why CI/CD pipelines were implemented with CC4ALL

CI/CD is an essential part of our DevOps approach and practice, enabling the continuous delivery of value. A CI/CD pipeline is a series of steps that must be performed in order to deliver a new version of a piece of software. Its true value is in introducing ongoing monitoring and automation to improve the process of application development, delivery, and deployment. Whilst “CI” always refers to continuous integration, “CD” refers to continuous delivery and/or continuous deployment (related concepts that are sometimes used interchangeably).

The implementation of CI/CD pipelines has been on the rise over the past few years due to the benefits and advantages they provide. Back in 2017, a survey of DigitalOcean’s community found that 42% used CI/CD. Of those who did not, 38% had plans to implement it in the future and 46% did not believe these methods were necessary for their workflow. It’s not easy to define the depth of CI/CD adoption and implementation today, since teams do not necessarily use all three concepts. A 2019 survey by mabl of 500 software testers revealed that 53% of teams use continuous integration, while only 38% practice continuous delivery and 29% continuous deployment. Regardless, it’s undeniable that interest in CI/CD has increased steadily over the years, as exemplified by Google search trend results.

More recently, the global COVID-19 crisis has accelerated the need for cloud and CI/CD automation. The Flexera 2020 State of the Cloud Report found that 39% of IT leaders saw implementing CI/CD in the cloud as a top priority. Rapid evolution in the tech world and the shift toward a growing number of microservices-based applications are also spurring the adoption of CI/CD platforms. Despite this, many companies today still do not use CI/CD unless they have a mature adoption of microservices and/or containers. This is largely because the complexity and challenges of employing cloud-native microservice architectures can be overwhelming.

In our project with CC4ALL, the milestone of transitioning towards a microservices architecture was already underway. Although there are many approaches to using containers and CI/CD pipelines together, CC4ALL uses a microservices architecture deployed on Azure Kubernetes Service. The greatest challenge was the prolonged period it took to define the pipelines for this configuration; our DevOps team spent weeks solely on this complex and time-consuming task. But the impact and benefits of the final result were well worth the time and effort. Automating the whole process has eliminated repetitive manual work, and the team has more time to focus on delivering new and improved versions, without downtime, in a short period of time (minutes rather than hours or days).

Let’s dive a little deeper into what each part of the CI/CD pipeline means and how it has affected CC4ALL.

Continuous Integration

Continuous integration (CI) is the practice of automating the integration of code changes from multiple contributors into a single software project. It’s one of the most important DevOps practices since it allows developers to merge code changes frequently into a central repository where build, tests and other steps will then run.

The main purpose of the CI pipeline is to generate a build artifact that packages the components of the application. In the case of CC4ALL, the artifacts are versioned Docker container images that represent each microservice of the platform, which are then pushed to Azure Container Registry. The mechanisms that ensure the integration of code changes are compiling, unit testing, and static code analysis. If any of the steps fail, the process stops and no artifacts are generated, which enables the team to discover and fix the root cause.
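The gate described above can be pictured as a function that yields a versioned artifact reference only when every integration step has passed. The following is an illustrative sketch, not CC4ALL’s actual pipeline code; the registry name and version scheme are made up:

```python
def build_artifact(version, checks):
    """Run CI checks in order; return a versioned image tag only if all pass."""
    for name, passed in checks:
        if not passed:
            # Fail fast: nothing is pushed to the registry, so a broken
            # change can never become a deployable artifact.
            raise RuntimeError(f"CI step failed: {name}")
    return f"registry.example.com/service:{version}"

tag = build_artifact("1.4.2", [
    ("compile", True),
    ("unit-tests", True),
    ("static-analysis", True),
])
```

The key property is that the artifact is produced after the checks, never before: a failed compile, unit test, or static-analysis step means there is simply nothing to deploy, which is exactly what lets the team trust every image that reaches the container registry.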

Here’s a list of CI best practices:

  • fix the broken builds
  • run tests locally before committing
  • keep the builds fast
  • commit early and commit often

Continuous Delivery

Continuous delivery is an extension of continuous integration. It automatically deploys all code changes to a testing and/or production environment after the build stage. This means that on top of automated testing, there is an automated release process and an application can be deployed at any time by clicking a button.

With modern platforms such as Kubernetes, the separation of environments might not be physical when compared to legacy or traditional machine-based platforms. A namespace (software separation) might be all that separates different testing and/or acceptance environments.

For CC4ALL, the continuous delivery pipeline relies on the Azure DevOps server, which coordinates all the deployment tasks. Since the platform uses Kubernetes to orchestrate the deployment of microservices, tasks only consist of connecting to the Kubernetes cluster and then applying the Helm charts that describe the new version of the application. The lower the number of steps to deploy a new release, the better.

Continuous Deployment

Continuous deployment goes one step further than continuous delivery. With this practice, every change that passes all stages of the production pipeline is released to customers. There’s no human intervention and only a failed test will prevent a new change from being deployed to production. It’s an excellent way to accelerate the feedback loop with customers and take pressure off the team.

Here are a few continuous deployment best practices:

  • ensure the number of steps to deploy the software are minimal
  • use rolling deployment that enables incremental replacement of old versions
  • maintain the staging environment as closely as possible to the production environment
  • avoid using environment-specific builds
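The rolling-deployment practice from the list above can be sketched as a loop that swaps old-version instances for new ones one batch at a time, so the fleet never drops to zero capacity and a health check could run between batches. Instance labels and batch sizes here are purely illustrative:

```python
def rolling_deploy(fleet, new_version, batch_size=1):
    """Replace old-version instances incrementally; return the fleet state
    after each batch so health checks could run between steps."""
    fleet = list(fleet)  # work on a copy of the running instances
    snapshots = []
    for start in range(0, len(fleet), batch_size):
        for i in range(start, min(start + batch_size, len(fleet))):
            fleet[i] = new_version  # replace one instance in this batch
        snapshots.append(list(fleet))  # snapshot after each batch
    return snapshots

steps = rolling_deploy(["v1", "v1", "v1"], "v2")
```

At every intermediate step, part of the fleet still serves the old version, which is what makes the rollout incremental: if a batch fails its health check, the remaining old instances keep handling traffic while the release is halted or rolled back.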

How it all comes together in software delivery

CI and CD are often mentioned together since they are different yet related stages in modern software delivery pipelines. The CI pipelines generate an artifact that is then used by a delivery pipeline to make the new version of an application available. Automating the whole pipeline ensures a quick feedback loop that enables the early detection of possible issues and a faster release rate. Using modern tools (such as Azure DevOps, Kubernetes, and DevOps practices) also makes it easier than ever to set up CI/CD pipelines, even in the case of complex architectures like microservices.

By combining a microservices architecture with CI/CD pipelines, CC4ALL has enjoyed the overall impact of a more advanced, high-performance application. Since all the processes are automated, the team can scale up and update the app without downtime.


The Benefits and Challenges of Transitioning to Infrastructure as Code (IaC)


IT infrastructures were traditionally managed and configured manually, which made processes time-consuming, costly, inconsistent, and vulnerable to human error. But computing revolutions and trends of recent years, such as cloud computing, have improved and transformed the provisioning of IT infrastructure. At the beginning of 2020, we decided to update the entire infrastructure of our system and went with Azure Kubernetes Service (AKS). The transition has caused a shift in our operations environment, since AKS follows infrastructure as code (IaC) principles, which was something we weren’t used to. Although the transition was not without challenges, we and our customers have experienced surprising benefits from this new infrastructure.

But what does IaC really mean, and how has it had a positive impact on us and our customers?

What is infrastructure as code (IaC)?

Infrastructure as code (IaC) is an automated type of infrastructure management using configuration files. In other words, it uses high-level descriptive coding language to automate the provisioning of IT infrastructure. This automation frees IT personnel and developers from having to manually configure, manage, and monitor infrastructure elements. As infrastructure configuration takes the form of a code file, it can be rapidly created, tracked, and deployed in the same way we do with the application code. In this way, IaC brings the repeatability, transparency, and rigorous testing of modern software development to the management of infrastructure.
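The “describe the desired state and let the tool work out the changes” idea can be illustrated with a toy planner: it compares a desired configuration against the current one and emits only the actions needed to converge, much as real IaC tools do at far larger scale. The resource names and specs below are invented for the example:

```python
def plan(desired, current):
    """Compare desired vs. current infrastructure state and return the
    actions needed to converge: create, update (drift), or delete."""
    actions = []
    for name, spec in desired.items():
        if name not in current:
            actions.append(("create", name))
        elif current[name] != spec:
            actions.append(("update", name))  # configuration drift detected
    for name in current:
        if name not in desired:
            actions.append(("delete", name))  # no longer described in code
    return actions

desired = {"vm-web": {"size": "B2s"}, "vnet-main": {"cidr": "10.0.0.0/16"}}
current = {"vm-web": {"size": "B1s"}, "vm-old": {"size": "B1s"}}
actions = plan(desired, current)
```

Running the planner against an already-converged state yields no actions at all, which is the repeatability property described above: applying the same configuration file twice cannot introduce discrepancies.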

The positive impact and benefits of IaC

Traditional manual IT practices are notoriously costly, slow, and inconsistent. In a time where hundreds of applications are deployed daily and infrastructure is constantly torn down or scaled up and down in response to developer and user demands, automation is essential to remain competitive. IaC makes this automation possible in infrastructure management and can be associated with many other benefits.

  • Speed and efficiency. Provisioning and management are faster and more efficient with IaC automation compared to manual processes. Since the complete infrastructure of every environment is defined in code, every phase of the software delivery cycle is accelerated. Time-consuming tasks are reduced to mere minutes so DevOps teams can deliver applications and their supporting infrastructure rapidly and at scale.
  • Control and lower cost. IaC lowers the cost of infrastructure management by automating tasks previously performed manually by specialized professionals. It frees developers from slow, error-prone manual tasks so they can focus on developing innovative software solutions instead.
  • Reduce risks and errors. IaC and automation reduces risks, human error, and prevents runtime issues or security vulnerabilities caused by configuration drift or missing dependencies.
  • Consistency. The repeatability of IaC guarantees that the same configurations will be deployed every time, without discrepancies. This allows the rapid creation of consistent infrastructure and environments, and helps avoid unique “snowflake” configurations or configuration drift caused by unintentional manual mistakes.
  • Validation and tracking. Since the infrastructure is versioned, the workflow is transparent and changes can be easily audited and tracked. The who, when, and why of every change is recorded and can be reviewed, providing a history of how environments were built and increasing awareness of changes to infrastructure.

The challenges faced in our transition to a new infrastructure

Our transition has contributed to the increased scalability, efficiency, performance, and consistency of our processes, but the journey up to this point was not without challenges.

Moving to an IaC infrastructure means no more convenient user interfaces to help with certain deployments. For example, it is much easier to deploy a virtual machine (VM) through the Microsoft Azure portal than by writing code, since writing code leaves a lot of room for mistakes.

Another obstacle was allocating enough time to overcome the learning curve. IaC requires additional tools and advanced coding skills that need to be learned and practiced. At the beginning especially, we had to invest time into learning how to code our infrastructure since the code files (JSON, YAML, etc.) needed to follow a precise structure. Plus, resource provisioning can take up a lot of time, and patience is necessary to prevent avoidable mistakes.

Moreover, migrating to a new infrastructure has also increased the complexity of our system. To be sure that our system is future-proof, we had to migrate to other frameworks, adopt Linux instead of Windows, and rethink essential parts of our system such as monitoring, caching, and networking. All those changes proved to be a long and demanding task, but in the end we are more than happy with what we achieved.

Integrating IaC into projects

We have integrated IaC into several projects and we will briefly use our collaboration with AutoDialog to demonstrate the benefits our customers have experienced as a result. It’s worth noting here that amongst the many additional tools available, AutoDialog uses IaC only with AKS, something we wish to expand in the future.

AutoDialog is a software for garages, car dealers, and their customers that makes processes and communication more transparent, simple, and efficient. It provides a dialogue between customers and car dealers or garages so customers can gain insight into the work process through an intuitive interface and fun-to-use platform. It connects all underlying systems so schedules, quotations, checklists, updates, and more are communicated mutually, transparently, and quickly.

Within AutoDialog, they have experienced several benefits from the implementation of this new infrastructure:

  • In the past, they had to deploy their infrastructure in another environment, which ate up time and introduced complexity or errors. With IaC, they can now deploy their infrastructure easily within a couple of minutes, and clean-up is even faster.
  • Since the infrastructure is versioned, they have full traceability and workflow transparency. The who, why, and when of infrastructure changes is tracked, and every change is reviewed when the pull request is made. This ensures that everyone is aware of changes and there aren’t any surprises later down the road.
  • The infrastructure code is readily available for everyone and improvements are welcomed.
  • It has freed developers from performing manual, error-prone tasks, and ensures that the same mistakes are never made again.

Why IaC might be for you

As more organizations strive to respond with speed to opportunities and competitive threats, many are turning to IaC automation to help facilitate this. Using the right IaC tools will automate infrastructure provisioning, which in turn helps to control costs, save time, and reduce errors. IaC automation also accelerates the software delivery process, provides workflow transparency, and enables teams to deliver stable environments rapidly and at scale. With the integration of IaC, developers and teams are freed from time-consuming and error-prone manual tasks, so they can instead focus on developing innovative solutions that add value to clients.


Why Customer Onboarding is Crucial and How We Do It


For many companies today, software is a vital gear at the core of their business. Developing one’s own business software can be part of a strategic initiative or a new business opportunity, as it helps increase the efficiency and effectiveness of a company’s activities. The downside of these types of projects is that they require high budgets, great amounts of effort from everyone involved, and the right IT supplier and partner. It’s particularly important to find an IT supplier and partner that you can collaborate with successfully so they can deliver your desired results. How can you identify the right, best-fit IT supplier and partner for you?

This is where customer onboarding comes into play. How a company conducts its customer onboarding reveals a lot about how dedicated, trustworthy, and professional it is. The initial steps and touchpoints of the onboarding process are also a strong indicator of how successful the collaboration will be.

What does customer onboarding mean to us?

To us at Expert Network (EXN), customer onboarding is a gradual and effective process of showing clients everything we have to offer and teaching them how to get the most out of our services. It’s about focusing on clients by listening to and answering their questions or concerns, then using a strategic plan to support and guide them so clients receive the results they expect from our collaboration. The way new customers are onboarded sets the tone for our ongoing relationship, which is why we place importance on the process and go to great lengths to ensure the experience is as smooth as possible.

The onboarding experience is a make-or-break point. A positive journey is confirmation for clients that they have made the right choice. It sets clients up for success, increases overall lifetime value for them, and turns new users into raving fans. A content clientele will also come back for more, tell their friends, and help to reduce churn. On the other hand, a negative one creates unhappy and frustrated clients who are unlikely to return.

With this in mind, we approach our onboarding plan strategically and define success as:

  • higher levels of software delivery performance
  • the development of a useful product we can place in the hands of customers
  • working software from every sprint and something to demonstrate progress
  • reduced release and deployment pain
  • healthy communication and collaboration
  • reduced team burnout and better work recovery
  • the assurance that we make constant progress on milestones and targets
  • a performance-oriented culture, where every individual has a strong identification with the company

Our approach to the onboarding plan

In order to provide a successful onboarding transition, we use a specific strategy and plan so that all parties involved are informed, in agreement with, and aware of what’s going on every step of the way. For us, each phase of development in the onboarding process revolves around two keywords: transparency and trust. We focus on our customers and guide them through each phase, clearly communicating what we do, why we do it, and how we do it.

The initial phase covers basic touchpoints with the client:

  • their vision, goals, and current pain points
  • timeline (if any)
  • technology stack
  • design requirements and status
  • technical requirements and status
  • non-functional requirements

We then follow up this initial phase with a discovery workshop. In the discovery workshop, we take a closer look at a client’s current business needs, their future plans and goals. We also consider what their ideal solution, timeline, and budget looks like for them to achieve those goals. The workshop focuses on delving deeper and really getting to know our client, then demonstrating the value we can provide them. Another key and important element at this point is managing the expectations of all parties involved in the project. By communicating and managing clear expectations, we can successfully and continuously deliver on what our customers have purchased within the agreed-upon conditions.

How we do it: our customer onboarding methodology and processes

For a successful digital delivery, methodology and processes are as important as the people involved and the technologies used. Although the specific flow of each project will vary slightly depending on a client’s unique needs and preferences, our overall methodology and processes consist of these elements: delivery approach, delivery cycles and Agile methodology, collaboration tools, project governance meetings, and aligned team leadership.

1. The delivery approach establishes the cone of uncertainty to guide estimates and manage expectations around a project’s scope, cost, and timeline.

2. Our delivery cycles and Agile methodology are presented to clients so they gain a clear understanding of the principles, practices, meetings, and delivery cadence that they can expect.

3. A variety of collaboration tools are utilized throughout the process. These can be divided by purpose as engineering (Microsoft Suite, TestRail, Azure, GitHub, Microsoft TFS, PhpStorm, etc.) or communication (Google Meet, Teams, Skype, Zoom, WhatsApp, etc.) tools.

4. Governance meetings are conducted with a monthly frequency, or as otherwise agreed. Project governance is the infrastructure that surrounds a project, dealing with: setting the right expectations, finding alignment between parties, projecting high-level roadmaps or status updates, the escalation of topics that couldn’t be solved by the delivery team, and addressing blockers or other opportunities.

5. Team leadership in projects is a combination of four roles and perspectives: DevOps Coach (EXN perspective), Product Owner (client operational perspective), Development and QA Leads (delivery perspective), and Account Manager (governance perspective). Through monthly meetings, or ad-hoc when requested by a team member, continuous alignment is ensured between these four perspectives throughout the project.
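The cone of uncertainty referenced in the delivery approach can be illustrated with numbers. The sketch below is a hypothetical Python example, assuming the commonly cited multipliers from software-estimation literature (roughly 0.25x–4x at project start, narrowing toward 1x as decisions are made) and an illustrative 100-day base estimate:

```python
# Approximate cone-of-uncertainty multipliers from software-estimation
# literature; the 100-day base estimate is purely illustrative.
BASE_ESTIMATE_DAYS = 100

cone = [
    ("initial concept",       0.25, 4.00),
    ("approved definition",   0.50, 2.00),
    ("requirements complete", 0.67, 1.50),
    ("design complete",       0.80, 1.25),
    ("detailed design done",  0.90, 1.10),
]

def estimate_range(base, low_factor, high_factor):
    """Return the (low, high) estimate bounds for a project phase."""
    return base * low_factor, base * high_factor

for phase, lo, hi in cone:
    low, high = estimate_range(BASE_ESTIMATE_DAYS, lo, hi)
    print(f"{phase:24s} {low:6.0f} - {high:6.0f} days")
```

The point of walking clients through these ranges is that an early estimate is a band, not a number, and the band narrows as scope decisions are locked in.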

The wrap up of a successful customer onboarding experience

Thus far, we’ve elaborated in detail on our considerations, strategy, methodology, and processes to conduct successful and positive customer onboarding experiences. Customer onboarding is all about continuously delivering on their expectations within agreed-upon conditions. It’s about showing clients what we have to offer, teaching them how to get the most out of our services, and supporting and guiding them so they’re set up for success.

By being transparent about our approach to customer onboarding, we hope current and future clients now have a greater understanding of what to expect from a collaboration with us. We face each project with dedication and professionalism, which has earned us many satisfied customers that place a great deal of trust in us.


Promoting Flexibility, Scalability, and Efficiency with Microservice Architecture


New technologies and developments emerge on a daily basis. But amongst the vast and continuously growing number of options out there, what will add innovation and improvements for clients? Before adopting or implementing new technologies, we place importance on carefully analyzing and researching them. This way, we can consider and align clients with the best solutions for their unique needs and challenges.

The beginning of our journey with ContactCenter4ALL (CC4ALL)

CC4ALL’s ambition is to connect the world and make an impact as a leader in customer contact solutions for Microsoft platforms. They deliver call center solutions through Microsoft Teams, Dynamics, and Skype for Business to the customer contact center market.

We’re currently in the process of redeveloping CC4ALL’s solutions to fit a cloud-native platform based on container technology. However, the monolithic architecture of their application forces our team to stick with the original technology used during its initial design and deployment. As the monolith continues to grow, it becomes increasingly inflexible and difficult to scale, which in turn hinders business growth.

Our solution was to propose a journey towards a new architecture: to transition from a monolithic to a microservice-oriented architecture.

What do ‘monolithic’ and ‘microservice’ architecture mean?

In this context, monolithic means composed all in one piece. A monolithic architecture is built as a single, large unit; a traditional unified model for the design of a software program. It’s designed to be self-contained and tightly-coupled, with interconnected and interdependent components that are often deployed all at once.

A microservice architecture uses a suite of small, modular units of code that can be deployed independently from the rest of a product’s components. Each module in the application is built around specific capabilities and runs on its own code-base. Its loosely coupled design allows developers to change or update modules without affecting other parts.
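The loose coupling described above can be sketched in plain Python. This is an illustrative sketch, not CC4ALL’s actual code: each “service” owns one capability behind a narrow contract, so in production each could be deployed and updated independently. Here they are plain functions to keep the example self-contained, and the service and field names are hypothetical:

```python
# Each "service" owns one capability and exposes a narrow contract.
# In production these would be separate processes behind HTTP/gRPC;
# plain functions keep the sketch runnable on its own.

def billing_service(order):
    """Owns pricing logic; can be redeployed without touching inventory."""
    return {"order_id": order["id"], "total": order["qty"] * order["unit_price"]}

def inventory_service(order):
    """Owns stock logic; knows nothing about how billing computes totals."""
    return {"order_id": order["id"], "reserved": order["qty"]}

def place_order(order):
    # The caller composes services through their contracts only, so either
    # service can change internally without breaking the other.
    return {"billing": billing_service(order), "inventory": inventory_service(order)}

order = {"id": 42, "qty": 3, "unit_price": 10.0}
print(place_order(order))
```

In a monolith, by contrast, billing and inventory logic would typically share one code-base and one deployment, so a change to either means redeploying everything.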

Approaching the challenge of a new architecture and tech vision

Although microservice architectures have experienced an explosion in growth, they’re not always the best-suited solution for every context. Monolithic architectures are simple to develop, test, and deploy, making them ideal for small teams or for quickly validating unproven products and proofs of concept. Microservice architectures, however, tend to be the stronger choice for large enterprise applications: microservices are easier to scale, faster to develop in parallel, and often perform better than monolithic applications.

Since CC4ALL’s monolithic application was impeding its flexibility and scalability, upgrading to a new microservice-oriented architecture was a natural progression. The new architecture would comprise multiple autonomous microservices, creating a flexible cross-platform foundation for both the server and the client. Our roadmap for this approach also utilizes Azure Kubernetes Service (AKS) to simplify the deployment and management of the new microservices-based architecture.

While microservice architectures are generally more agile than monoliths, they introduce a level of complexity with their own set of challenges. Each microservice requires its own infrastructure, dedicated continuous integration and delivery pipeline, and monitoring process. This results in a slower initial deployment for the application, as the operational complexity of such a system takes time and meticulous attention to execute. In order for the distributed system to operate smoothly, a microservice architecture needs to be implemented carefully and correctly.

Impact for the client and end-user

Clients are at the center of everything we do – the reason why we continuously strive to offer novel solutions and improvements through innovative and best-suited technology. CC4ALL is now building on top of container technology, but it’s borderline impossible to deploy and manage all the containers manually. When considering specialized software to orchestrate and manage deployments at an enterprise level, AKS is the first choice that comes to mind.

Kubernetes makes it easier for developers to test, maintain, and publish container-driven applications. It accelerates containerized application development and helps developers easily define, deploy, debug, and upgrade even the most complex applications. It enables the creation and deployment of large-scale containerized applications and keeps them running reliably, providing a high degree of automation. Kubernetes can also optimize the use of resources to reduce costs: instead of keeping unnecessary machines running, it frees up these resources and uses them for other tasks. Through auto-scaling, Kubernetes ensures it doesn’t use more resources than necessary.

Another crucial factor is that Kubernetes can scale quickly. It can spin up additional instances fast, so the system doesn’t crash under an exceptionally high amount of traffic. The environment can also be redeployed from templates, allowing additional production environments for customers to be deployed within minutes. The technology provides a high degree of flexibility, resiliency, and efficiency by streamlining horizontal scaling, self-healing, load balancing, and secret management.
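As a hedged illustration of the auto-scaling behavior described above, Kubernetes typically expresses it with a HorizontalPodAutoscaler attached to a Deployment. The names, image, and thresholds below are placeholders, not CC4ALL’s actual configuration:

```yaml
# Hypothetical Deployment plus autoscaler; all names and limits are examples.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: contact-api            # placeholder service name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: contact-api
  template:
    metadata:
      labels:
        app: contact-api
    spec:
      containers:
        - name: contact-api
          image: example.azurecr.io/contact-api:1.0   # placeholder image
          resources:
            requests:
              cpu: "250m"      # baseline the autoscaler measures against
            limits:
              cpu: "500m"
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: contact-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: contact-api
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU exceeds 70%
```

With a manifest like this, Kubernetes adds replicas under load and removes them when traffic subsides, which is the resource optimization described above.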

Long-term advantages of the new microservice architecture

The long-term advantages of this new microservice-oriented architecture include:

  • scalability and flexibility
  • continuous delivery
  • the advantage of automated technological updates
  • improved fault isolation
  • developer independence

Microservices are an excellent option for situations where developers can’t fully predict what devices will be accessed by the application in the future. They allow quick and controlled changes to the software without slowing down the application as a whole. In comparison, one misbehaving or changed component in a monolithic architecture can bring down the entire system.

One of the greatest benefits of microservices is the application flexibility and performance they unlock. A microservice architecture lets developers decompose an application into independently executing services. These individual microservices can be updated easily, and the resulting update can be placed into production without lengthy integration work across different development teams. New languages and frameworks can also be adopted as soon as they’re released to build out new components and services.

With this upgrade from a monolithic to a microservice architecture, we ensure greater efficiency, application flexibility, and improved performance. Its benefits have helped us to meet our project’s goals with ease and proficiency, delivering high standards and customized solutions to clients.
