By Sally A. Vazquez-Castellanos, Esq.
Published on January 16, 2026 at 7:40 am.
My morning question with generative AI was this:
Talk about a spinning ballerina. What happens when workers are told to “skill up,” while access to work is controlled by opaque systems—often on our smartphones—that can exclude applicants without explanation or accountability?
The contemporary employment marketplace is increasingly intermediated by technology. Job seekers prepare résumés, tailor applications, and often pay for optimization tools, premium visibility, or “AI-compatible” formatting. Applications are submitted through online portals that promise efficiency and matching. Responses, when they arrive, are typically automated acknowledgments.
What remains largely unseen is how many applications are filtered, ranked, or excluded by Applicant Tracking Systems (ATS) and automated screening tools before any human review occurs—and how the business incentives of some intermediaries intersect with that process. This architecture raises serious questions about transparency, accountability, and the integrity of access to work.
Automation as the First Gatekeeper
Artificial intelligence now governs the earliest—and often decisive—stage of employment decision-making. ATS platforms evaluate résumés using keyword matching, scoring models, and pattern recognition trained on historical hiring data.
While designed to manage volume and reduce costs, these systems can:
- Exclude qualified candidates based on proxies unrelated to job performance;
- Replicate historical hiring patterns, including past inequities;
- Deprioritize nontraditional career paths, employment gaps, or lateral transitions;
- Operate without meaningful explanation, auditability, or review mechanisms.
As a result, access to employment is increasingly mediated by systems that are nontransparent by design, frequently accessed through smartphone interfaces that conceal how decisions are made.
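The mechanics described above can be illustrated with a minimal sketch of keyword-based résumé scoring. The keywords, weights, and cutoff below are hypothetical illustrations, not any vendor's actual criteria; the point is how a qualified applicant who phrases experience differently can be filtered out before any human review.

```python
# Minimal sketch of keyword-based resume scoring, as commonly described
# for Applicant Tracking Systems. All keywords, weights, and the cutoff
# are hypothetical, chosen only to illustrate the filtering mechanism.

REQUIRED_KEYWORDS = {"python": 3, "sql": 2, "agile": 1}  # hypothetical weights
CUTOFF = 4  # hypothetical score below which no human ever sees the resume


def score_resume(text: str) -> int:
    """Sum the weights of required keywords found in the resume text."""
    words = set(text.lower().split())
    return sum(weight for kw, weight in REQUIRED_KEYWORDS.items() if kw in words)


def screen(resumes: dict[str, str]) -> list[str]:
    """Return only the applicants whose score meets the cutoff."""
    return [name for name, text in resumes.items()
            if score_resume(text) >= CUTOFF]


applicants = {
    # Uses the exact vocabulary the filter expects.
    "A": "Python and SQL developer on an agile team",
    # Equally qualified, but phrased differently ("PostgreSQL", "Scrum"),
    # so the exact-match filter silently scores it below the cutoff.
    "B": "Built data pipelines in Python with PostgreSQL on a Scrum team",
}
print(screen(applicants))  # only "A" advances; "B" is excluded without review
```

Applicant B is rejected not for lacking skill but for word choice, which is exactly the kind of proxy, unrelated to job performance, described above.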
Algorithmic Bias and Structural Exclusion
A substantial body of research shows that automated hiring tools may perpetuate racial, gender, age, and socioeconomic bias, particularly when trained on data reflecting historic disparities. Unlike traditional discrimination, algorithmic exclusion can occur without individualized decision-making or observable intent.
When screening systems intentionally—or recklessly—exclude workers through opaque criteria, the consequences are not abstract. Applicants are denied consideration without recourse, and entire groups may be disproportionately filtered out. These dynamics raise civil rights concerns where protected classes are affected and governance concerns where systems are not explainable or auditable.
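The disparate-impact concern can be made concrete with the EEOC's "four-fifths" guideline: if one group's selection rate falls below 80% of the highest group's rate, adverse impact is generally indicated. The group names and pass rates below are hypothetical, used only to show the arithmetic.

```python
# Adverse-impact check using the EEOC "four-fifths" guideline:
# a group's selection rate below 80% of the highest group's rate is
# generally treated as evidence of disparate impact.
# All group names and numbers here are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who pass the screen."""
    return selected / applicants


def four_fifths_check(rates: dict[str, float]) -> dict[str, bool]:
    """Flag each group whose rate is below 80% of the highest rate."""
    highest = max(rates.values())
    return {group: rate / highest < 0.8 for group, rate in rates.items()}


# Hypothetical outcomes after automated screening.
rates = {
    "group_1": selection_rate(60, 100),  # 60% pass the automated screen
    "group_2": selection_rate(30, 100),  # 30% pass
}
print(four_fifths_check(rates))  # group_2 flagged: 0.30 / 0.60 = 0.5 < 0.8
```

The arithmetic is trivial; the governance problem is that when the screen is a proprietary model, no one outside the vendor may ever compute these rates at all.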
The “Skill Up” Imperative—Placed in Context
Public and private responses to technological disruption frequently emphasize reskilling. Initiatives such as “Skill Up L.A.” represent important efforts to align workforce training with emerging labor-market demands and to treat transition as a shared responsibility.
But training alone does not guarantee access to employment if screening mechanisms remain opaque, biased, or misaligned with actual hiring authority. Credentials cannot overcome barriers when automated systems prevent meaningful consideration. For reskilling to succeed, access pathways must be as functional and accountable as training pathways; otherwise, the cost of failed transitions becomes a public burden.
Platform Economics and the Risk of Misrepresentation
Another emerging concern is the commercialization of job seeking. Many platforms offer paid services promising increased visibility or improved ranking, often without control over whether a position is current, genuinely open, or subject to competitive review.
Where intermediaries generate revenue from repeated applications that do not result in interviews or employment, legitimate questions arise about:
- The authenticity and currency of job listings;
- The alignment between platform incentives and placement outcomes;
- The use, retention, and potential repurposing of applicant data.
In other markets, charging consumers to pursue opportunities that are unavailable raises concerns about misrepresentation. In employment, such dynamics warrant heightened scrutiny because they affect livelihoods, and because the families who depend on those livelihoods are especially vulnerable to discriminatory policies and practices.
A Comparative View: Where Protections Exist
The entertainment industry offers a contrasting example. In response to AI-related risks, SAG-AFTRA and related unions negotiated protections addressing:
- Digital replicas and synthetic performances;
- Consent and compensation for AI use of likeness and voice;
- Disclosure when AI is deployed;
- Limits on substituting automated outputs for credited human labor.
These safeguards exist because workers in those sectors possess institutional leverage and negotiated rights. Most workers navigating automated hiring platforms do not.
An Emphatic Position: Re-Centering Human Judgment
Given the documented risks of bias, opacity, and exclusion in ATS-mediated hiring, and the possibility that systems may intentionally or recklessly screen out workers, our economy may need to consider seriously reviving companies dedicated to placing people in jobs the old-fashioned way.
That position is not nostalgic; it is pragmatic. A human-centered placement model would:
- Verify that positions are real, current, and genuinely open;
- Review candidates through contextual, human judgment;
- Align fees with successful placements rather than application volume;
- Provide employers vetted talent and applicants fair consideration;
- Maintain transparent criteria, backed by laws and regulations that ensure accountability for outcomes.
In an economy saturated with automation, restoring human accountability is not a rejection of technology; it is a corrective where technology undermines fairness.
Beyond Retraining: Governance, Accountability, and Access
Historical experience shows retraining works when:
- Jobs exist and are accessible;
- Hiring processes are fair, explainable, and reviewable;
- Workers can sustain transitions financially;
- New roles provide stability and progression.
Artificial intelligence challenges each assumption. Without oversight of automated hiring systems and the platforms that mediate them, retraining risks becoming an individual obligation imposed within structurally constrained systems.
This reality invites a broader inquiry: Who is responsible for the design choices embedded in the platforms we rely on? What duties do developers, platform operators, and employers owe to applicants whose livelihoods depend on fair access?
Conclusion
Artificial intelligence will continue to reshape labor markets. The central question is not whether workers should adapt, but whether access to work will be governed with integrity.
Reskilling without access reform risks inefficiency. Automation without oversight risks exclusion. Where ATS platforms intentionally or recklessly exclude workers, and where intermediaries profit from opaque processes, the case for human-centered alternatives becomes compelling. We could spend thousands of dollars skilling up to support technology upgrades, but the fundamental question remains: do these systems, technologies, and machines really have America's back?
It may be time to ask harder questions of the systems behind our smartphones—and to invest in processes that are not so dependent on the internet—placing transparency, accountability, and human judgment back at the center of employment.
Sources from Generative AI
Brookings Institution. AI, Labor Displacement, and the Limits of Worker Retraining. Brookings Institution.
Goldman Sachs Global Investment Research. The Potentially Large Effects of Artificial Intelligence on Economic Growth. Goldman Sachs.
Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press, 2018.
SAG-AFTRA. Artificial Intelligence and Digital Replica Protections. SAG-AFTRA.
OnLabor. How Trade Displacement History Can Prepare Us for AI Displacement Now. Harvard Law School.
U.S. Bureau of Labor Statistics. Employment Situation Summary. U.S. Department of Labor.
U.S. Equal Employment Opportunity Commission. Artificial Intelligence and Algorithmic Fairness. EEOC.
Legal Disclaimer
This article is provided for general informational and educational purposes only. It does not constitute legal advice, does not create an attorney-client relationship, and should not be relied upon as a substitute for consultation with qualified legal, labor, or policy professionals. Laws, labor agreements, and economic conditions vary by jurisdiction and may change over time.
© Sally A. Vazquez-Castellanos. All rights reserved.
