Hardcoded Secrets
Hardcoded secrets are one of the most common and dangerous security vulnerabilities in modern applications. When API keys, passwords, database credentials, encryption keys, or authentication tokens are embedded directly in source code, they become easily discoverable by attackers and nearly impossible to rotate without code changes.
What Are Hardcoded Secrets?
Hardcoded secrets refer to sensitive credentials, API keys, passwords, encryption keys, and authentication tokens that are embedded directly into source code as string literals or constants. This practice occurs when developers write credentials directly in code files, configuration files that are version-controlled, or even compiled binaries. While it may seem convenient during development, hardcoding secrets creates a massive security vulnerability that can have catastrophic consequences.
The fundamental problem with hardcoded secrets is that they become permanently embedded in the codebase. Once committed to version control systems like Git, these secrets are visible in the repository history even if later removed. When source code is pushed to public repositories on platforms like GitHub, GitLab, or Bitbucket, these credentials become publicly accessible to anyone. In the case of mobile applications or desktop software, secrets can be extracted from compiled binaries through decompilation or reverse engineering.
Hardcoded secrets violate the principle of separating code from configuration. OWASP classifies the practice as CWE-798 (Use of Hard-coded Credentials), which maps to A07:2021 (Identification and Authentication Failures) in the OWASP Top 10; hardcoded cryptographic keys also fall under A02:2021 (Cryptographic Failures). Automated scanning tools constantly crawl public repositories looking for exposed credentials, meaning that a secret pushed to a public repo can be discovered and exploited within minutes.
How It Works
During development or deployment, a developer writes an API key, database password, JWT secret, or other credential directly into the source code as a string constant. This might be in a Java class, a configuration file like application.properties, or a JavaScript file.
The file containing the secret is committed to Git and pushed to a remote repository. Even if the repository is initially private, the secret is now permanently embedded in the repository history. Changing the repository to public or granting access to new team members exposes the secret.
Attackers use automated tools to scan public repositories for patterns matching API keys, passwords, and tokens. Alternatively, if source code is leaked, decompiled from a mobile app, or exposed through a misconfigured server, the attacker gains access to the hardcoded credentials.
The attacker extracts the credentials from the source code or compiled binary. They then test the credentials against the associated service (API endpoint, database server, cloud provider) to confirm they are valid and determine the level of access they provide.
With valid credentials, the attacker can access the protected resource. This could mean reading sensitive data from a database, making expensive API calls billed to the victim, accessing cloud infrastructure, or impersonating the application to access third-party services. The impact depends on the privileges associated with the compromised credential.
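The scanning step above can be sketched with a few regular expressions. The patterns below are illustrative only; real tools such as TruffleHog or GitGuardian combine hundreds of rules with entropy analysis to reduce false positives.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SecretScanner {
    // Illustrative patterns; production scanners ship far larger rule sets.
    private static final List<Pattern> PATTERNS = List.of(
        Pattern.compile("AKIA[0-9A-Z]{16}"),         // AWS access key ID
        Pattern.compile("sk_live_[0-9a-zA-Z]{24,}"), // Stripe live secret key
        Pattern.compile("SG\\.[0-9A-Za-z._-]{20,}")  // SendGrid API key
    );

    // Return every substring of the given source text that matches a
    // known credential pattern.
    public static List<String> scan(String source) {
        List<String> hits = new ArrayList<>();
        for (Pattern p : PATTERNS) {
            Matcher m = p.matcher(source);
            while (m.find()) {
                hits.add(m.group());
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        String code = "String key = \"AKIAIOSFODNN7EXAMPLE\";";
        System.out.println(scan(code)); // flags the embedded AWS key
    }
}
```

Because these patterns are cheap to run at scale, attackers can apply them to every public commit on GitHub in near real time, which is why exposure windows are measured in minutes, not days.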
Vulnerable Code Example
@Configuration
public class AppConfig {

    // VULNERABLE: API keys and secrets hardcoded as constants
    private static final String AWS_ACCESS_KEY = "AKIAIOSFODNN7EXAMPLE";
    private static final String AWS_SECRET_KEY = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY";
    private static final String JWT_SECRET = "mySecretKey123!@#";
    private static final String DB_PASSWORD = "admin123";

    @Bean
    public DataSource dataSource() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mysql://localhost:3306/mydb");
        config.setUsername("admin");
        // VULNERABLE: Database password hardcoded
        config.setPassword(DB_PASSWORD);
        return new HikariDataSource(config);
    }

    @Bean
    public AmazonS3 s3Client() {
        // VULNERABLE: AWS credentials hardcoded
        BasicAWSCredentials awsCreds = new BasicAWSCredentials(
                AWS_ACCESS_KEY,
                AWS_SECRET_KEY
        );
        return AmazonS3ClientBuilder.standard()
                .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
                .build();
    }
}
# application.properties (checked into Git)
# VULNERABLE: Secrets in config files
jwt.secret=mySecretKey123!@#
stripe.api.key=sk_live_51H8xYz2eZvKYlo2C3hGZmqPxzN
sendgrid.api.key=SG.xxxxxxxxxxxxxxxxxxxxxxxxxxx

# If this code is pushed to a public repository,
# all these credentials are immediately compromised.
# Even if removed later, they remain in Git history.
Secure Code Example
@Configuration
public class AppConfig {

    // SECURE: Load secrets from environment variables
    @Value("${AWS_ACCESS_KEY}")
    private String awsAccessKey;

    @Value("${AWS_SECRET_KEY}")
    private String awsSecretKey;

    @Value("${JWT_SECRET}")
    private String jwtSecret;

    @Value("${DB_PASSWORD}")
    private String dbPassword;

    @Bean
    public DataSource dataSource() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mysql://localhost:3306/mydb");
        config.setUsername("admin");
        // SECURE: Password loaded from environment variable
        config.setPassword(dbPassword);
        return new HikariDataSource(config);
    }

    @Bean
    public AmazonS3 s3Client() {
        // SECURE: Credentials loaded from environment/IAM roles
        // For production, use IAM roles instead of access keys
        return AmazonS3ClientBuilder.standard()
                .withCredentials(new DefaultAWSCredentialsProviderChain())
                .build();
    }
}
# application.properties (safe to commit)
# SECURE: Only placeholders, no actual secrets
jwt.secret=${JWT_SECRET}
stripe.api.key=${STRIPE_API_KEY}
sendgrid.api.key=${SENDGRID_API_KEY}

# .gitignore (CRITICAL: prevent .env from being committed)
.env
.env.local
application-local.properties

# .env (NEVER commit this file)
JWT_SECRET=generated-secret-from-vault
STRIPE_API_KEY=sk_live_retrieved_from_vault
AWS_ACCESS_KEY=retrieved_from_aws_secrets_manager
AWS_SECRET_KEY=retrieved_from_aws_secrets_manager
DB_PASSWORD=retrieved_from_vault

# For production: Use HashiCorp Vault, AWS Secrets Manager,
# Azure Key Vault, or Google Secret Manager.
# For development: Use .env files with .gitignore.
Types of Hardcoded Secrets
Source Code Secrets
The most common form of hardcoded secrets. Developers write API keys, database passwords, JWT signing keys, encryption keys, or authentication tokens directly as string literals or constants in source code files. These might appear in Java classes, Python scripts, JavaScript files, or any other programming language. Source code secrets are easily discoverable by anyone with access to the repository, and they persist in Git history even after removal. They are the primary target of automated scanning tools that crawl public repositories.
Configuration File Secrets
Credentials stored in configuration files that are committed to version control. This includes files like application.yml, application.properties, config.json, .env files (when not properly ignored), or database connection strings in XML configuration. While configuration files are intended to separate settings from code, they often contain actual secret values instead of references to environment variables or vault systems. Configuration secrets are particularly dangerous because developers may not recognize these files as containing sensitive data, leading them to commit them to Git without hesitation.
Binary & Compiled Secrets
Secrets embedded in compiled applications, mobile app binaries (APK/IPA), Docker images, or other distributed artifacts. Even though the source code may not be publicly available, compiled binaries can be decompiled or reverse-engineered to extract hardcoded strings. Mobile applications are especially vulnerable because APK and IPA files can be easily downloaded and analyzed. Docker images pushed to public registries may contain secrets in environment variables or filesystem layers. JAR files, executables, and shared libraries can all be inspected to reveal embedded credentials, making this a critical risk for distributed software.
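The extraction step is easy to demonstrate: even without a decompiler, a `strings`-style pass over a compiled artifact reveals embedded literals, because string constants survive compilation verbatim (in a class file's constant pool, for example). A minimal sketch of that pass:

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class StringsExtractor {
    // Collect runs of printable ASCII at least minLen bytes long,
    // mimicking the Unix `strings` utility.
    public static List<String> extract(byte[] binary, int minLen) {
        List<String> found = new ArrayList<>();
        StringBuilder run = new StringBuilder();
        for (byte b : binary) {
            if (b >= 0x20 && b <= 0x7e) { // printable ASCII
                run.append((char) b);
            } else {
                if (run.length() >= minLen) {
                    found.add(run.toString());
                }
                run.setLength(0);
            }
        }
        if (run.length() >= minLen) {
            found.add(run.toString());
        }
        return found;
    }

    public static void main(String[] args) {
        // Simulate a compiled binary containing a hardcoded credential
        // surrounded by non-printable bytes.
        byte[] fakeBinary = "\u0001\u0002password=admin123\u0000\u0003"
                .getBytes(StandardCharsets.UTF_8);
        System.out.println(extract(fakeBinary, 8)); // [password=admin123]
    }
}
```

Anyone who downloads an APK, JAR, or Docker layer can run exactly this kind of scan, which is why obfuscation is no substitute for keeping secrets out of the artifact entirely.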
Impact
The exposure of hardcoded secrets can have severe and far-reaching consequences. Because these credentials often have broad access privileges, a single leaked secret can compromise an entire system or cloud infrastructure.
Exposed AWS, Azure, or Google Cloud credentials can grant attackers full access to cloud resources. This includes the ability to launch expensive compute instances (cryptomining), access sensitive data in storage buckets, modify infrastructure configurations, or delete critical resources. Cloud providers have reported instances where exposed credentials led to bills exceeding hundreds of thousands of dollars within days.
Hardcoded database credentials or API keys that access user data can lead to massive data breaches. Attackers can extract customer information, financial records, personal identifiable information (PII), or proprietary business data. The regulatory and legal consequences of such breaches (GDPR fines, lawsuits, loss of customer trust) can be catastrophic for an organization.
Leaked JWT signing keys, session secrets, or OAuth credentials allow attackers to forge authentication tokens and impersonate legitimate users or administrators. This can bypass all authentication mechanisms and grant the attacker full control over user accounts, including administrative accounts with elevated privileges.
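To make the token-forging risk concrete, here is a sketch of what an attacker can do with a leaked HS256 signing secret, using only the JDK's `javax.crypto` and `java.util.Base64` APIs. The secret and claims below are taken from the vulnerable example for illustration; the resulting token is indistinguishable from one the server issued itself.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class JwtForgery {
    // With the leaked HS256 signing secret, mint a token carrying
    // arbitrary claims, such as an admin role the attacker never had.
    public static String forge(String secret, String payloadJson) {
        try {
            Base64.Encoder enc = Base64.getUrlEncoder().withoutPadding();
            String header = enc.encodeToString(
                    "{\"alg\":\"HS256\",\"typ\":\"JWT\"}".getBytes(StandardCharsets.UTF_8));
            String payload = enc.encodeToString(
                    payloadJson.getBytes(StandardCharsets.UTF_8));
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(
                    secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
            byte[] sig = mac.doFinal(
                    (header + "." + payload).getBytes(StandardCharsets.UTF_8));
            return header + "." + payload + "." + enc.encodeToString(sig);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String token = forge("mySecretKey123!@#",
                "{\"sub\":\"attacker\",\"role\":\"admin\"}");
        System.out.println(token); // passes the server's signature check
    }
}
```

Because HMAC signatures prove only possession of the shared secret, rotating a leaked signing key immediately (and invalidating all outstanding tokens) is the only remediation.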
When third-party API keys or service credentials are exposed, attackers can abuse the victim's relationship with external services. For example, a leaked SendGrid API key could be used to send phishing emails that appear to come from the victim organization. Exposed credentials for internal systems can enable lateral movement within an organization's network, allowing attackers to pivot from one compromised system to another.
Prevention Checklist
Never hardcode secrets in source code. Use environment variables for local development and dedicated secret management solutions like HashiCorp Vault, AWS Secrets Manager, Azure Key Vault, or Google Secret Manager for production. These systems provide centralized secret storage, automatic rotation, access control, and audit logging.
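A useful companion to environment-based configuration is failing fast at startup when a required secret is absent, rather than silently falling back to a hardcoded default. The sketch below is a minimal illustration (the class and variable names are assumptions, matching the secure configuration example):

```java
import java.util.List;
import java.util.Map;

public class RequiredSecrets {
    // Fail fast at startup if any required secret is missing or blank,
    // so the service never runs with an insecure default.
    public static Map<String, String> load(Map<String, String> env,
                                           List<String> required) {
        for (String name : required) {
            String value = env.get(name);
            if (value == null || value.isBlank()) {
                throw new IllegalStateException(
                        "Missing required secret: " + name
                        + " (set it as an environment variable, never in code)");
            }
        }
        return env;
    }

    public static void main(String[] args) {
        // In a real service this would be System.getenv();
        // a sample map keeps the example self-contained.
        Map<String, String> env = Map.of(
                "JWT_SECRET", "from-vault",
                "DB_PASSWORD", "from-vault");
        load(env, List.of("JWT_SECRET", "DB_PASSWORD"));
        System.out.println("All required secrets present");
    }
}
```

Failing at boot surfaces a missing secret in deployment, where it is cheap to fix, instead of at first use in production.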
Integrate automated secret scanning tools like GitGuardian, TruffleHog, detect-secrets, or GitHub's secret scanning into your development workflow and CI/CD pipeline. These tools scan commits, pull requests, and repositories for patterns matching API keys, passwords, and tokens, preventing secrets from being committed in the first place.
Always include .env, .env.local, secrets.yml, and any other files containing credentials in your .gitignore file. Educate developers about which files should never be committed. Use .env.example files with placeholder values to show required configuration without exposing actual secrets.
Configure Git pre-commit hooks using tools like Husky (for JavaScript) or pre-commit framework (for Python) to scan code for secrets before commits are created. This provides a last line of defense at the developer's workstation, catching secrets before they ever reach the remote repository.
Establish and enforce regular credential rotation policies. Secrets should have expiration dates and be automatically rotated on a schedule. Use short-lived credentials and temporary tokens whenever possible. Modern secret management systems can automate rotation, reducing the window of opportunity for compromised credentials.
Train development teams on secure credential management practices. Include secret detection as part of mandatory code review checklists. Make security awareness part of the development culture. When secrets are accidentally committed, have a clear incident response plan that includes immediate revocation, rotation, and security assessment.
Real-World Examples
Uber
Uber engineers accidentally committed AWS credentials to a private GitHub repository. Attackers gained access to this repository and used the credentials to download data on 57 million Uber riders and drivers from an Amazon S3 bucket. Rather than immediately disclosing the breach, Uber paid the hackers $100,000 to delete the data. The company later paid $148 million to settle claims from all 50 U.S. states and was fined for failing to disclose the breach.
Samsung
Samsung's software development teams accidentally exposed GitLab tokens and AWS credentials in public repositories on GitHub. The exposed credentials provided access to Samsung's internal source code repositories, potentially including proprietary code for devices and services. Security researchers discovered the credentials through automated scanning, highlighting how quickly exposed secrets can be found once committed to public repositories.
Twitter
A security researcher discovered Twitter's internal API keys and authentication tokens hardcoded in a publicly accessible mobile application binary. The exposed credentials could have allowed unauthorized access to Twitter's internal APIs and backend systems. While Twitter quickly rotated the compromised credentials, the incident demonstrated that even major tech companies can fall victim to hardcoded secrets in mobile applications.
Toyota
Toyota's T-Connect customer portal exposed nearly 300,000 customer email addresses and vehicle management data for roughly five years due to hardcoded credentials in source code. A subcontractor developing the portal had embedded an access key directly in the codebase that was then accidentally made public. The exposure went undetected from December 2017 until its discovery in 2022, demonstrating how hardcoded secrets can create long-term, persistent vulnerabilities.
Ready to Test Your Knowledge?
Put what you have learned into practice. Try identifying and fixing hardcoded secrets vulnerabilities in our interactive coding challenges, or explore more security guides to deepen your understanding.