Projects

DedicatedCV

Full-stack development · Azure · DevOps · Python · React · PostgreSQL

Built alongside Lea Aboujaoude, Anze Zgonc, Mathew Porteous, Boris Gans, Fares Qaddoumi.

As entrants to the job market, we have experienced how unpleasant it can be to create good CVs and keep track of their different versions. So we built DedicatedCV, a web app that helps professionals create their best CVs and manage them easily. After signing up, a user can create a new CV by filling in the corresponding sections (name, summary, experience, skills, etc.). In each field, the user can invoke the AI-enabled enhancement feature, which helps them optimize their CV. They can then preview their CV in multiple styles. At this stage, they can also view a tailored feedback panel that gives them insight into how to improve their CV, and translate it into a different language at the click of a button. Finally, they can export their CV as a PDF, or generate a link to it, allowing them to share access globally. The submitted demo video https://amrgh.me/devops-demo depicts this user experience. The product is fully functional and live, so you can try it yourself.

We implemented authentication and authorization using JWTs with Argon2 password hashing. When a user requests a CV, the system first validates their JWT, extracts their user identity, and confirms they own the requested resource before processing the request. This approach prevents unauthorized access while maintaining stateless authentication that scales horizontally.
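The validate-then-check-ownership flow above can be sketched with standard-library primitives. In the app this logic lives behind FastAPI dependencies; the secret, the `sub` claim, and the helper names here are illustrative assumptions, not the real implementation.

```python
# Sketch: HS256 JWT validation followed by a resource-ownership check.
# SECRET, claim names, and function names are illustrative assumptions.
import base64
import hashlib
import hmac
import json

SECRET = b"change-me"  # loaded from the environment in practice


def _b64url_decode(part: str) -> bytes:
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))


def verify_jwt(token: str) -> dict:
    """Return the payload if the HS256 signature checks out, else raise."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = hmac.new(SECRET, signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, _b64url_decode(sig_b64)):
        raise PermissionError("invalid signature")
    return json.loads(_b64url_decode(payload_b64))


def authorize_cv_access(token: str, cv_owner_id: int) -> bool:
    """True only when the token's subject owns the requested CV."""
    payload = verify_jwt(token)  # raises on a forged or tampered token
    return int(payload["sub"]) == cv_owner_id
```

Because the ownership check happens before any CV data is processed, a valid token for user A can never read or mutate user B's CVs.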

The CV builder supports unlimited CVs per user, each containing four customizable sections. Work experience entries capture company, position, location, dates, and detailed descriptions. Education records accommodate degree information, honors, relevant coursework, and thesis titles, with GPA support for academic CVs. Skills can be categorized by type and proficiency level, while projects showcase technical work with technology stacks and descriptions. Each section supports custom ordering, allowing users to prioritize their most relevant experiences. The system supports CV duplication and automatically cleans up all associated sections when a CV is deleted, maintaining data consistency.

The backend exposes a RESTful API with 35 endpoints providing complete CRUD operations. All endpoints except authentication and health checks require valid JWT tokens. The API validates email formats, GPA ranges, and date formats at multiple levels, with nested data retrieval allowing the frontend to fetch complete CVs with all sections in a single request. Security measures include CORS configuration for authorized origins, protection against SQL injection, and OAuth2 password flow following industry standards.

Architecture & Development

The application follows a three-tier architecture pattern that separates concerns into distinct layers.

The Presentation Layer contains the React frontend utilizing TanStack Router for navigation and TanStack Query for server state management. The Application Layer contains the FastAPI backend organized into modular endpoints for resource handling, SQLAlchemy models for database representation, Pydantic schemas for validation, and core utilities for configuration and security. The Data Layer contains a PostgreSQL database with seven relational tables managing user accounts, CVs, and associated sections. Communication flows via RESTful HTTP APIs using JSON payloads, with JWT tokens providing stateless authentication. The backend connects to PostgreSQL through SQLAlchemy ORM, with Alembic managing schema migrations.

Presentation Layer

The frontend uses React 19 with TypeScript and Vite as the build tool. TanStack Router provides file-based routing with automatic code splitting, generating type-safe route definitions from the /routes directory structure that separates authenticated (/app) and public (/auth) flows. TanStack Query manages server state through custom hooks organized in /hooks that encapsulate query keys, mutations, and cache invalidation logic. The /lib/api directory contains a centralized ApiClient class handling HTTP requests, JWT token injection, and automatic 401 redirect flows. Zod provides runtime type validation while Radix UI primitives deliver accessible component foundations. The component architecture separates concerns across /components/ui for reusable primitives, /components/cv for domain-specific forms, and /components/auth for authentication flows. TailwindCSS handles styling with a utility-first approach. The modular organization ensures components remain decoupled from API implementation details through the service layer abstraction in /lib/api/services, allowing backend endpoint changes without touching component code.

Application Layer

We chose FastAPI as the core framework of the application layer. FastAPI's seamless Pydantic integration provides request validation with minimal boilerplate, and its async capabilities handle concurrent requests efficiently. Pydantic v2 handles all data validation and serialization, SQLAlchemy 2.0 serves as the ORM layer, and Alembic manages database migrations. For authentication, we chose Argon2 over bcrypt for password hashing due to its superior resistance to GPU-based cracking attempts.

The backend follows a modular organization across five directories.

  • The /api/v1/endpoints directory contains resource-specific route handlers: auth.py manages authentication flows, cvs.py handles CV operations, and separate files cover education, skills, work_experiences, projects, and health checks. This separation ensures changes to one resource type never require modifications to another.
  • The /models directory manages SQLAlchemy database models defining table structures and relationships.
  • The /schemas directory contains Pydantic validation schemas that define API contracts.
  • The /core directory provides cross-cutting functionality: config.py for environment-based settings, security.py for authentication utilities, deps.py for dependency injection definitions, and monitoring.py for Azure Application Insights integration.
  • The /services directory stands ready for business logic as complexity grows, though current CRUD operations remain in endpoint handlers.

The modular design demonstrates SOLID principles through concrete implementation choices.

  • SRP by separating models, schemas, endpoints, and services.
  • OCP through Pydantic schema inheritance (CVBase → CVCreate, CVUpdate, CVResponse) and the middleware architecture (MonitoringMiddleware was integrated without touching existing endpoint code).
  • ISP through the schema design: CVCreate includes only the fields required for creation, CVUpdate makes all fields optional for partial updates, and CVResponse includes computed fields like IDs and timestamps.
  • DIP through FastAPI's dependency injection system in deps.py: endpoints depend on abstractions like get_db() and get_current_user() rather than concrete implementations.

Data Layer

PostgreSQL serves as the data tier; it provides superior handling of concurrent transactions, which is essential when multiple users simultaneously edit different CVs.

The database schema consists of seven tables. The alembic_version table tracks migration state. All relationships implement cascade delete constraints to ensure that removing a CV automatically cleans up all associated sections without leaving orphaned records.

The database schema centers on the User table’s one-to-many relationship with CV and CV’s one-to-many relationships with the four section types. The User table enforces email uniqueness through a unique constraint. The CV table indexes user_id for efficient querying of all CVs belonging to a user. The four dependent tables each index cv_id for fast section retrieval. All entities inherit standard fields (id as primary key, created_at and updated_at timestamps) from a base model class to ensure consistent behavior across tables.

Testing

The application employs a comprehensive automated testing strategy across the entire codebase. The backend test suite consists of 123 tests organized into 8 modules using pytest and FastAPI’s TestClient, with each test running against a fresh SQLite in-memory database for complete isolation. Testing coverage includes authentication (16 tests for JWT token handling and OAuth2 flow), authorization (40 tests ensuring users can only access their own resources), CRUD operations (83 tests covering all 35 API endpoints), data validation (24 tests for email formats, GPA ranges, and required fields), and edge cases (15 tests for error handling and cascade deletes). Tests are organized into classes by operation type (e.g., TestCreateCV, TestUpdateCV) and leverage pytest fixtures for reusable test data including multiple user scenarios for authorization testing. The translation service uses a lightweight testing approach with mocked dependencies to avoid downloading large ML models during tests, while the frontend employs Vitest for testing and TypeScript’s compiler for type safety validation.

All tests run automatically in the CI pipeline with intelligent change detection that executes only relevant test suites when specific services are modified. The pipeline includes linting (Ruff for Python, Biome for TypeScript), type checking (mypy, tsc), unit testing with coverage reporting using pytest-cov, and build verification. Coverage reports are uploaded as artifacts for review, and branch protection rules require all tests to pass before code can be merged, ensuring code quality and preventing regressions. The test execution completes in approximately 6-7 seconds for the full backend suite, providing rapid feedback during development while maintaining comprehensive validation of application functionality and security constraints.

Monitoring, Logging, and Reliability

Since we knew we would be using Azure to deploy our app, we used Azure Application Insights for monitoring, logging, and reliability. To integrate Azure Application Insights with our app, we used the azure-monitor-opentelemetry package for logging the events in our app and exporting them to Azure Application Insights. We also created a middleware that measured and exported metrics such as latency, error rate, etc. for every request without having to modify the code for every single function. Finally, we created a dashboard so we can easily view these metrics and verify the reliability of our system.

Azure Infrastructure

For our infrastructure, we used the Azure resources provided to us by the university. First, we created an Azure Container Registry, which was crucial because other infrastructure components pull images from this registry for continuous deployment. Then, we created two Azure App Services: one for our backend service and one for our frontend service. They pull their corresponding container image from the ACR and create a container instance. For our database, we created an Azure Database for PostgreSQL flexible server and connected it to our backend App Service. We also used Azure Blob Storage for affordable object storage, which we used for storing the generated CV PDFs that our users create. Finally, we used Azure Application Insights for monitoring, logging, and reliability, and created a dashboard that helped us monitor and verify the performance of our system. We duplicated this setup to create a staging environment for verifying performance before applying any changes to our production environment.

CI/CD Pipeline Architecture

We decided on a two-pipeline approach, using GitHub Actions to create two separate but connected workflows:

  1. Continuous Integration (CI) Pipeline: Validates code quality, correctness, and security through automated testing and static analysis
  2. Continuous Deployment (CD) Pipeline: Builds containerised artifacts and orchestrates their deployment across staging and production environments.

This separation ensures that only high-quality code that adheres to security standards progresses to deployment, and allows us to continuously test development code without forcing it to deploy.

Key Features

The pipeline is built with several key features in mind:

Separation of concerns
The CI and CD pipelines have their own distinct responsibilities, as mentioned above. This gives us fast feedback on code quality (for example, the CI pipeline runs as soon as a branch has an open PR) while keeping deployment separate.

Shift-left security
Security validation is integrated into every stage of the pipeline: pre-commit hooks check for secrets, CI runs static security checks, and dynamic security checks run after deployment to the staging environment in the CD pipeline. This layered approach lets us catch security vulnerabilities at different points in development and ensures that vulnerable code never reaches production.

Monorepo Optimisations
To optimise CI runtime and resource usage, our pipeline intelligently detects where changes are made and only runs the corresponding jobs. If only backend changes were made while developing a new feature, no frontend jobs (linting, testing, etc.) will be run. Global security scans always run regardless.

Staged Deployment Strategy
Before deploying to production, every deployment first goes to the staging environment (an identical replica of production). The staging environment allows us to conduct extensive dynamic security checks as well as QA tests before deployment to production, which requires manual approval.

Pipeline Flow Architecture and Trunk Based Development

The pipeline is designed with modern trunk-based development principles in mind. During the development stage a developer will commit code to a feature branch (here pre-commit hooks are used) and open a pull request. Once a pull request is opened the CI pipeline will run on any pushes to the feature branch, allowing the developer to iterate on received feedback until the pipeline passes.

Once the developer is done and the pull request is approved, the branch is merged with main, at which point the CI reruns to validate and upon completion the CD pipeline is executed. The code is not immediately deployed to production, but rather is deployed to staging first, until production deployment is manually approved. This allows us to use the main branch as a single “source-of-truth”, while still deploying only functional and secure code to production.

CI Pipeline

The CI pipeline validates code quality, correctness and security before code is merged into the main branch.

Trigger Strategy and Concurrency Control
The CI pipeline is triggered on all pushes and pull requests targeting the main branch. We also implemented intelligent concurrency controls that cancel in-progress runs for the same branch when a new commit is pushed, to avoid wasting compute.

Intelligent Change Detection and Conditional Execution
We use dorny/paths-filter to detect which parts of the codebase changed and run jobs accordingly: backend jobs run only when backend changes are made, and likewise for frontend jobs, while some global jobs always execute.
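The mapping can be illustrated in a few lines of Python. The actual pipeline expresses this declaratively in dorny/paths-filter YAML; the directory patterns and job names below are assumptions about the repo layout, not the real filter file.

```python
# Sketch of change-detection → job-selection logic.
# FILTERS patterns and job names are illustrative assumptions.
from fnmatch import fnmatch

FILTERS = {
    "backend": ["backend/*"],
    "frontend": ["frontend/*"],
}


def jobs_to_run(changed_files: list[str]) -> set[str]:
    """Map a commit's changed paths to the component jobs that must run."""
    jobs = {
        name
        for name, patterns in FILTERS.items()
        if any(fnmatch(path, pattern)
               for path in changed_files for pattern in patterns)
    }
    jobs.add("global-security")  # global scans run regardless of the diff
    return jobs
```

A backend-only commit thus triggers only backend jobs plus the always-on security scans, which is what keeps CI runtime proportional to the change.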

Backend Quality Gates
The backend validation pipeline runs five sequential jobs when backend changes are detected:

  1. The environment setup job installs dependencies using uv, with lockfile-based caching, and uploads the virtual environment as an artifact so follow-up jobs don’t need to reinstall dependencies.
  2. The code quality and type safety job uses Ruff for linting and mypy for static type checking.
  3. Unit tests run with pytest, with coverage reports uploaded as an artifact.
  4. The build check ensures the package can be built and installed using uv build.
  5. Security scanning uses pip-audit for dependency vulnerabilities and Semgrep for SAST to detect vulnerabilities and secrets, with results uploaded in SARIF format for GitHub Security tab integration.

Frontend Quality Gates
The frontend validation pipeline runs four jobs, when frontend changes are detected:

  1. Frontend jobs use Bun for dependency management and execution, again with dependency caching via the GitHub Actions cache.
  2. Biome handles formatting and linting, while the TypeScript compiler performs type checking.
  3. The build job runs Vitest tests and uses Vite to build the production bundle.
  4. Security scanning uses npm audit to detect dependency vulnerabilities and, again, Semgrep for SAST, with results uploaded in SARIF format.

Global Security Controls and Aggregation
Secret scanning runs on every commit using Gitleaks to ensure secrets and credentials are never exposed; this global job runs regardless of what changes were made. We also use an aggregation job at the end of the pipeline, ci-success, which acts as a single status check for whether the pipeline was successful and lets us set up GitHub branch protection rules.

Optimisations
In the pipeline we implement several optimisations:

  • Dependency caching - avoids reinstalling dependencies, saving time and compute
  • Artifact reuse - the backend virtual environment is built once and then reused
  • Parallel execution - backend and frontend jobs run simultaneously
  • Conditional execution - only affected components are validated

These improvements have made our pipeline more modular and improved time and resource utilization.

CD Pipeline

The CD pipeline builds containerised artifacts and orchestrates deployment to staging and later production.

Trigger Mechanism and Safety Gates
The CD pipeline triggers automatically when the CI pipeline runs successfully on the main branch (when a PR is merged), or via a manual trigger. This ensures only validated code is deployed while still allowing emergency deployments, rollbacks, and the like. The initial check-ci job confirms that the CI run passed, or that deployment was triggered manually.

Stage 1: Container Image Building and Tagging Strategy
This stage generates environment-specific container images and pushes them to the Azure Container Registry with semantic tags. The frontend has two builds: a staging image tagged :staging and a production image tagged :latest, each baked with the corresponding environment’s backend API URL. In addition to the environment tags, the images receive immutable SHA-based tags (derived from the commit SHA), which are used for version tracking and rollbacks. Finally, to improve build times we use Docker BuildKit with layer caching, reusing unchanged layers. This stage ensures that each environment receives correctly configured artifacts.
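The tagging scheme can be sketched as follows; the registry host and service names are placeholders, not the project's real ACR.

```python
# Sketch of the mutable-environment-tag + immutable-SHA-tag scheme.
# REGISTRY is a placeholder, not the project's actual ACR host.
REGISTRY = "example.azurecr.io"


def image_refs(service: str, environment: str, commit_sha: str) -> list[str]:
    """Return the mutable environment tag plus an immutable SHA tag.

    The environment tag drives GitOps pulls (:staging / :latest), while
    the SHA tag pins the exact build for version tracking and rollbacks.
    """
    env_tag = "latest" if environment == "production" else "staging"
    return [
        f"{REGISTRY}/{service}:{env_tag}",
        f"{REGISTRY}/{service}:sha-{commit_sha[:7]}",
    ]
```

Rolling back then amounts to re-tagging a previous SHA-pinned image as :latest, since the environment tags are the only ones the App Services watch.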

Stage 2: Staging Deployment and Health Verification
For staging deployment we follow a GitOps pull-based model: the Azure App Service (in this case the staging resources) continually monitors the ACR for changes to the :staging tag. When a new frontend or backend image is pushed to the ACR, the App Service automatically detects, pulls, and deploys it. This gives us zero-downtime deployments and means the pipeline does not have to run any explicit deployment commands, as these are handled by Azure. After the deployment completes, the pipeline runs smoke tests (health checks, database connection checks, etc.) to confirm the deployment succeeded and to catch errors early.

Stage 3: Dynamic Application Security Testing
Once the staging deployment is complete and validated, we run dynamic security tests against the staging environment by executing OWASP ZAP baseline scans against both the backend and frontend. The backend scan targets the FastAPI endpoints, testing things such as the authentication mechanism and security headers, while the frontend scan examines the React application for cross-site scripting, cookie security, and so on. We use a custom rules and configuration file (.zap/rusel.tsv) to define the severity levels for different vulnerabilities. Once the scans complete, the security reports are uploaded as artifacts for review.

Stage 4: Production Deployment with Manual Approval Gate
Production deployments follow the same GitOps model as staging, except that the App Services monitor for images with the :latest tag and deployment requires explicit approval. The promotion-production job uses GitHub’s environment protection rules to pause the pipeline and wait for manual approval. Once approval is given, the pipeline builds the production images with the appropriate tags and pushes them to the ACR, where Azure handles deployment to the App Services.

Stage 5: Production Deployment Verification
At this stage, smoke tests and health checks run against the production environment to confirm a successful rollout. Once verification completes, a deployment summary containing key information (deployed URLs, commit SHAs, DAST security scan results, etc.) is published to the GitHub workflow summary.

Optimisations
Several key optimisations are implemented within the CD pipeline:

  • Docker BuildKit registry caching - reduces image build time
  • GitOps approach - eliminates complex deployment scripts and credential management
  • Immutable SHA-tagged images - allow easy and precise rollbacks using previous versions in the ACR

Conclusion

In conclusion, we built a fully functional product that allows users to create, manage, export, and share their CVs. We built advanced features such as AI-enabled resume enhancement, AI-enabled feedback insights, and a one-click resume translation service. We used programming best practices and design patterns to ensure the app would be easy to maintain and extend. We created a comprehensive test suite that let us verify that the app was working as expected. We also integrated monitoring with Azure Application Insights. We made use of many Azure components: App Services, a PostgreSQL flexible server, Application Insights, and Blob Storage. We created a fully automated CI/CD pipeline for automatic integration and deployment of our code, which ensured high-quality code standards and security. We went above and beyond by creating a staging environment and pushing changes to production only when they work as expected in staging. We provided extensive documentation: a comprehensive README with usage and setup instructions and architecture diagrams, a Swagger UI API reference, and self-documenting commented code. For organizing the project, we followed the SCRUM methodology, using GitHub Projects as our Kanban board and maintaining a comprehensive document covering sprints and retrospectives throughout the duration of the project.