Teamworks is a VC-backed startup, founded in 2004, that provides an engagement platform built by athletes for athletes. This worldwide collaboration software is designed to make it easier for elite athletic teams to operate effectively and efficiently, from scheduling and communication to sharing files and managing travel. It helps more than 3,000 elite athletic teams connect and collaborate so they can focus on winning. Learn more about Teamworks at www.teamworks.com.
Software, Sports, Education
Better integration with AWS services to improve performance efficiency and scalability.
Migration, DevOps (CI/CD)
Improved performance efficiency
Want to achieve benefits like these? Schedule a free Application Modernization Assessment with nClouds to learn how to build sustainable systems for delivering better software faster.
“To provide a superb collaboration platform to our customers, it’s critical for the Teamworks app to excel in performance efficiency and scalability. With nClouds’ expertise in migration and DevOps, we were able to optimize our app to deliver high availability, low latency, consistent performance, and scalable capacity.”
Site Reliability Engineering (SRE) Manager, Teamworks
Teamworks was implementing continuous integration (CI) in their architecture and wanted better integration with AWS services to improve performance efficiency and scalability.
Teamworks was introduced by their AWS Account Manager to nClouds, a Premier Consulting Partner in the AWS Partner Network (APN). After conducting a discovery meeting, the nClouds team identified several key ways that Teamworks could modernize its infrastructure. Teamworks was impressed with the pre-sales engagement and decided to move forward.
Teamworks required a modernized software architecture to improve performance efficiency and scalability.
nClouds partnered with Teamworks to build out an Amazon Elastic Kubernetes Service (Amazon EKS) cluster in the staging environment using Terraform, and then performed the infrastructure buildout and migration in the production environment.
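As a rough illustration of what a Terraform-driven EKS buildout can look like, here is a minimal sketch using the community `terraform-aws-modules/eks` module. The cluster name, instance types, sizes, and versions are placeholders, not Teamworks’ actual configuration.

```hcl
# Minimal EKS cluster sketch using the community EKS module.
# All names, versions, and sizes are illustrative assumptions.
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  cluster_name    = "staging-eks"
  cluster_version = "1.29"

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets # one private subnet per AZ

  eks_managed_node_groups = {
    default = {
      instance_types = ["m5.large"]
      min_size       = 2
      max_size       = 6 # headroom for the Cluster Autoscaler
      desired_size   = 3
    }
  }
}
```

Defining the cluster in Terraform lets the same module be applied first in staging and then promoted to production with environment-specific variables.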
Teamworks’ existing application stack was migrated from Amazon Elastic Compute Cloud (Amazon EC2) to Amazon EKS, following best practices for migrating, configuring, and deploying applications to Kubernetes.
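A migrated service typically ends up described by a Kubernetes Deployment manifest along these lines. The service name, image, port, and health-check path below are hypothetical stand-ins, not details from the Teamworks stack.

```yaml
# Sketch of a Deployment for a service migrated from EC2 to EKS.
# Names, image, and probe path are illustrative assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: teamworks-api # hypothetical service name
spec:
  replicas: 3 # spread across the Availability Zones
  selector:
    matchLabels:
      app: teamworks-api
  strategy:
    type: RollingUpdate # replace pods gradually, keeping the service up
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  template:
    metadata:
      labels:
        app: teamworks-api
    spec:
      containers:
        - name: api
          image: registry.example.com/teamworks-api:1.0.0
          ports:
            - containerPort: 8080
          readinessProbe: # gate traffic until the pod is healthy
            httpGet:
              path: /healthz
              port: 8080
```

The rolling-update strategy plus a readiness probe is what allows Kubernetes to roll out new versions without dropping traffic.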
nClouds revamped Teamworks’ CI/CD by implementing new CI/CD pipelines in GitLab for all services in the stack (porting the functionality formerly handled by Jenkins), migrating source code from Bitbucket to GitLab, and integrating the existing monitoring and log aggregation tools (Prometheus, Grafana, Graylog) with the new architecture. Kubernetes enables zero-downtime deployments, and Helm simplifies deploying applications to the cluster.
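A GitLab pipeline of this kind can be sketched roughly as follows in a `.gitlab-ci.yml`. Stage layout, images, the test command, and the Helm release name are assumptions for illustration, not Teamworks’ actual pipeline.

```yaml
# Illustrative .gitlab-ci.yml for one service in the stack.
# Stages, images, and the chart path are assumptions.
stages:
  - build
  - test
  - deploy

build:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

test:
  stage: test
  image: python:3.12
  script:
    - pip install -r requirements.txt
    - pytest

deploy:
  stage: deploy
  image: alpine/helm:3
  script:
    # Helm drives a rolling upgrade, so the release stays available
    - helm upgrade --install teamworks-api ./chart --set image.tag="$CI_COMMIT_SHORT_SHA"
  environment: staging
```

Using GitLab’s built-in registry variables (`$CI_REGISTRY_IMAGE`, `$CI_COMMIT_SHORT_SHA`) keeps image tags tied to commits, which is what makes rollbacks straightforward.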
Teamworks’ new architecture includes an Amazon VPC spanning three Availability Zones (AZs), with a private subnet in each AZ and an Auto Scaling group distributing instances across them. The application’s stateless services run on the Amazon EKS cluster, with AWS Lambda providing serverless compute and Amazon S3 providing object storage. Load balancing is handled by an AWS Application Load Balancer (ALB) in place of the existing HAProxy load balancer.
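Replacing HAProxy with an ALB on EKS is typically done by letting the AWS Load Balancer Controller provision the ALB from a Kubernetes Ingress. A minimal sketch follows; the hostname and backing service are placeholders.

```yaml
# Sketch of an Ingress that the AWS Load Balancer Controller turns
# into an ALB. Host and service names are illustrative assumptions.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: teamworks-ingress
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip # route directly to pod IPs
spec:
  ingressClassName: alb
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: teamworks-api
                port:
                  number: 80
```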
Teaming with nClouds, Teamworks now has a modernized architecture on AWS. The project has yielded numerous benefits:
Amazon S3’s features reduce latency and increase throughput for object storage. Amazon CloudFront improves overall caching performance, reduces the load on the origin, and minimizes latency. Datadog monitors, troubleshoots, and optimizes end-to-end application performance.
Multiple AZs enable Teamworks’ production applications and databases to be highly available, fault-tolerant, and scalable. An AWS Auto Scaling group automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost.
As traffic to Teamworks’ application changes over time, the AWS ALB automatically scales to handle the load. ElastiCache for Redis scales Teamworks’ cache to match demand. Amazon EKS automatically manages the availability and scalability of the Kubernetes control plane nodes that are responsible for starting and stopping containers, scheduling containers on virtual machines, storing cluster data, and other tasks. AWS uses advanced Ethernet networking technology designed for scale, security, high availability, and low cost.
Within Kubernetes, a Cluster Autoscaler scales the number of nodes in the cluster up and down as needed based on different constraints, a Horizontal Pod Autoscaler scales the number of pods available in a cluster in response to the present computational needs, and a Vertical Pod Autoscaler allocates more (or less) CPU and memory to existing pods.
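The Horizontal Pod Autoscaler piece, for example, can be expressed as a short manifest like the one below, which scales a (hypothetical) `teamworks-api` Deployment on CPU utilization. The replica bounds and target are illustrative assumptions.

```yaml
# Minimal HorizontalPodAutoscaler sketch: keeps average CPU near 70%
# by scaling a hypothetical Deployment between 3 and 15 replicas.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: teamworks-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: teamworks-api
  minReplicas: 3
  maxReplicas: 15
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

When pending pods exceed node capacity, the Cluster Autoscaler then adds nodes (within the node group’s min/max bounds) to accommodate them, so the two autoscalers work in tandem.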