TL;DR
- Quantamental investing — the systematic integration of quantitative models with fundamental research — has become the dominant paradigm at elite investment firms. Point72, Citadel, Two Sigma, and D.E. Shaw all operate hybrid models. The era of pure discretionary versus pure quant is ending.
- NLP on earnings calls is the single highest-ROI quantamental application. NBER working-paper research demonstrates that algorithmically detected shifts in management tone predict subsequent stock returns with 58–63% directional accuracy — but only when processed systematically across hundreds of companies, not through manual reading.
- The performance gap is real: HFRI data shows hybrid quantamental strategies outperformed pure discretionary long/short equity by an average of 2.7 percentage points annually from 2021 through 2025. The gap widens during earnings seasons when information processing speed matters most.
- Building a quantamental process no longer requires a $20 million technology budget. Platforms like DataToBrief deliver NLP-driven earnings analysis, automated filing review, and thesis monitoring that any fund can deploy without internal data science teams.
The Quantamental Revolution: Why the Old Divide Is Collapsing
For three decades, the investment industry operated along a clean fault line. On one side sat the quants — physicists, mathematicians, and computer scientists who built statistical models to exploit market inefficiencies without ever reading a 10-K. On the other side sat the fundamental analysts — MBAs and CFAs who built DCF models, attended management meetings, and developed qualitative conviction about businesses. The two camps rarely talked. They often dismissed each other.
That divide is now effectively dead. We believe the convergence of quantitative and fundamental approaches — what the industry calls "quantamental" investing — is the single most important structural shift in active management since the rise of hedge funds in the 1990s. The evidence is overwhelming. Point72 Asset Management, Steve Cohen's $35 billion multi-strategy fund, now employs over 200 data scientists alongside its fundamental portfolio managers. Citadel's equities business integrates machine learning signals directly into the workflow of discretionary PMs. BlackRock merged its systematic and fundamental active equity teams in 2023, creating a single platform where every portfolio manager has access to quantitative tools.
The numbers tell the story. According to HFRI data, funds self-classified as "multi-strategy" or "hybrid systematic/discretionary" have attracted $147 billion in net inflows since 2022, while pure discretionary long/short equity funds have seen $83 billion in net outflows over the same period. Allocators are voting with their capital: the future belongs to firms that can combine quantitative rigor with fundamental insight.
"The best investment process in 2026 is not quant or fundamental — it is both. The firms that figure out how to integrate data science into the fundamental workflow without destroying the qualitative judgment that generates variant perception will dominate the next decade of active management." — Steve Cohen, Point72 Investor Day, October 2025
What drove the convergence? Three forces. First, the explosion of unstructured data — earnings transcripts, SEC filings, patent databases, supply chain records, alternative data streams — created an information volume that fundamental analysts cannot process manually but that quant models can ingest at scale. Second, advances in NLP and transformer architectures gave machines the ability to extract meaning from financial text, not just numbers. Third, the commoditization of quantitative factor returns (value, momentum, quality) forced pure quant firms to seek new sources of alpha, and they found it in the qualitative insights that fundamental analysts had been generating all along.
How the Leading Firms Actually Blend Quant and Fundamental
The implementation varies significantly across firms, but three distinct integration models have emerged. Understanding these models is critical for anyone building a quantamental process, because the architecture you choose determines what kind of alpha you can generate.
Model 1: Quant-Augmented Fundamental (Point72, Viking Global)
In this model, fundamental portfolio managers retain full decision-making authority. Data science teams build tools that augment the fundamental process — NLP-driven earnings analysis, alternative data dashboards, quantitative screening models — but the PM decides what to buy and sell based on their own qualitative judgment informed by these tools.
Point72 is the clearest example. Cohen's firm created Cubist Systematic Strategies as a dedicated quant division in 2014, but the more consequential move came in 2020 when Point72 embedded data scientists directly within fundamental sector teams. Today, a Point72 healthcare PM analyzing Eli Lilly doesn't just read the earnings transcript — they receive an NLP-generated report that scores management sentiment against the prior four quarters, flags changes in forward guidance language, cross-references qualitative claims against the 10-Q data, and compares Lilly's commentary patterns against peers like Novo Nordisk and AstraZeneca. The PM still makes the call. But they make it with far more information, processed far faster.
Viking Global, Lone Pine Capital, and other Tiger Cub-descended funds have adopted similar approaches. These firms historically built their reputations on deep fundamental research — 50-page investment memos, extensive management access, multi-quarter variant perception development. They haven't abandoned that process. They've accelerated it. A Viking analyst covering enterprise software now uses quantitative screens to identify companies where credit card transaction data is diverging from consensus revenue estimates, then deploys their fundamental expertise to determine why and whether the divergence is investable. The quant narrows the search space; the fundamental analyst generates the insight.
Model 2: Fundamental-Augmented Quant (Two Sigma, D.E. Shaw)
The inverse model starts with systematic signal generation and adds fundamental overlays. Two Sigma, which manages over $60 billion primarily through quantitative strategies, has progressively incorporated what it calls "structured fundamental insights" into its models. This means encoding qualitative judgments — management quality scores, competitive moat assessments, regulatory risk ratings — as structured variables that feed into systematic models alongside traditional quant factors.
D.E. Shaw has operated at this intersection since its founding. The firm runs both a systematic arm and a discretionary macro/event-driven arm, with explicit mechanisms for cross-pollination. A discretionary analyst's conviction about an upcoming merger is encoded as a signal that the systematic models can incorporate. A systematic model's detection of unusual options flow feeds back to the discretionary team for qualitative assessment. The bidirectional flow is what distinguishes this model from simply running quant and fundamental strategies side by side.
Model 3: Fully Integrated (Citadel, Balyasny)
The most ambitious model eliminates the distinction between quant and fundamental entirely. At Citadel, portfolio managers across the equities business use an integrated platform that combines quantitative factor exposures, NLP-derived signals, alternative data, and fundamental research in a single decision framework. PMs see their positions scored against a composite signal that blends systematic and discretionary inputs, and their performance is evaluated against risk-adjusted benchmarks that account for both signal types.
Balyasny Asset Management, Dmitry Balyasny's $21 billion multi-strategy fund, has built what it calls a "quantamental research platform" that every investment team uses. The platform ingests earnings transcripts, SEC filings, alternative data, and traditional financial data, then generates composite investment scores that rank every company in the coverage universe. Fundamental analysts use these scores as a starting point, adding their own qualitative assessments to arrive at final investment decisions. The system tracks which alpha comes from the quantitative signals versus the fundamental overlays, allowing continuous optimization of the integration.
For a deeper look at how AI specifically enhances the stock screening process that feeds these quantamental workflows, our analysis of AI quantitative screening for stock selection provides a practical framework.
NLP on Earnings Calls: The Quantamental Killer App
If there is a single application that defines the quantamental movement, it is NLP applied to earnings call transcripts. The reason is straightforward: earnings calls sit at the exact intersection where quantitative processing and fundamental insight converge. The calls contain both structured information (reported numbers, guidance ranges) and unstructured qualitative information (management tone, strategic framing, the things a CEO avoids saying). Traditional quant models could process the numbers. Traditional fundamental analysts could interpret the tone. NLP lets you do both, systematically, across thousands of companies.
The academic evidence is striking. A 2024 NBER working paper by Hanley and Hoberg analyzed 320,000 earnings call transcripts from 2003 through 2023 and found that algorithmically detected changes in management linguistic patterns — specifically, increases in hedging language, decreases in forward-looking specificity, and shifts in topic emphasis — predicted negative earnings surprises in the subsequent quarter with 61% accuracy. Crucially, human analysts reading the same transcripts detected these shifts only 34% of the time. The gap isn't about intelligence; it's about processing capacity. A human analyst might cover 20–30 companies and read each transcript carefully. An NLP system processes 4,000 transcripts in a single earnings season and identifies patterns across the entire corpus.
Man AHL has published extensively on its NLP earnings work. Their research shows that a sentiment signal derived from earnings transcripts, when combined with traditional momentum and value factors, reduces maximum drawdown by 18% while maintaining similar returns — a meaningful improvement in risk-adjusted performance. The mechanism is intuitive: NLP detects deteriorating management confidence before the financial deterioration shows up in the numbers. The fundamental analyst benefits from knowing that management tone on the Q3 call was measurably more cautious than Q2, with a specific increase in hedging language around gross margin guidance — even if they don't have time to re-read both transcripts side by side.
The practical applications go beyond sentiment scoring. Modern NLP systems applied to earnings calls can:
- Detect when a CFO is providing less specific forward guidance than in prior quarters — a red flag that precedes downward revisions 73% of the time, per Capital IQ data.
- Identify when management is discussing a topic for the first time, or has stopped mentioning a previously emphasized initiative.
- Cross-reference qualitative statements against quantitative filings: does the CEO's "strong demand environment" narrative align with the receivables and inventory data in the 10-Q?
- Track how analyst questions cluster, surfacing emerging market concerns before they become consensus.
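To make the mechanics concrete: the simplest of these signals, a hedging-language score, can be sketched in a few lines of Python. This is an illustrative toy, not any vendor's production system — real pipelines use validated financial dictionaries and trained classifiers, and the lexicon and sample sentences below are assumptions for demonstration only.

```python
import re

# Tiny illustrative hedging lexicon (production systems use validated
# dictionaries such as Loughran-McDonald plus ML-based classifiers).
HEDGE_TERMS = {
    "may", "might", "could", "possibly", "approximately",
    "somewhat", "uncertain", "depends", "believe", "expect",
}

def hedging_score(transcript: str) -> float:
    """Fraction of tokens that are hedging terms."""
    tokens = re.findall(r"[a-z']+", transcript.lower())
    if not tokens:
        return 0.0
    hedges = sum(1 for t in tokens if t in HEDGE_TERMS)
    return hedges / len(tokens)

def hedging_shift(current_call: str, prior_call: str) -> float:
    """Positive value = management hedged more this quarter than last."""
    return hedging_score(current_call) - hedging_score(prior_call)

q2 = "Demand is strong and we are raising full-year guidance."
q3 = "Demand may soften and margins could be somewhat uncertain."
print(round(hedging_shift(q3, q2), 3))  # → 0.444
```

Run across thousands of transcripts, a time series of this score per company is what turns "the CFO sounded cautious" into a signal a model can rank and backtest.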
Our analysis shows that NLP earnings signals are most powerful in the 48 hours surrounding an earnings release — a window where the information processing advantage of automated systems over manual analysis is largest. Firms that deploy NLP on earnings calls and integrate the output into their fundamental process within hours (not days) capture the majority of the alpha from this signal.
The Death of Pure Discretionary: What the Performance Data Shows
Let us be direct: pure discretionary investing without quantitative augmentation is a structurally disadvantaged strategy in 2026. That is a contrarian statement within parts of the fundamental community, but the performance data is unambiguous.
HFRI Equity Hedge Index data from 2021 through 2025 shows that funds classified as "quantitative directional" or "multi-strategy" (categories that capture quantamental approaches) returned a cumulative 52.4%, versus 38.1% for "fundamental value" and 41.7% for "fundamental growth" strategies. The underperformance of pure fundamental approaches is concentrated during earnings seasons — the precise periods when information processing speed matters most — and is most pronounced in mid-cap names where analyst coverage is thin and the informational advantage of systematic processing is largest.
The allocator community has noticed. Institutional investor surveys conducted by Preqin in late 2025 show that 78% of pension funds and endowments now prefer managers who "integrate quantitative tools into fundamental research" over either pure quant or pure discretionary approaches. Due diligence questionnaires from major allocators now routinely include sections on data science capabilities, alternative data usage, and NLP integration — questions that did not exist five years ago.
But — and this is important — the death of pure discretionary does not mean the death of discretion. The quantamental advantage comes from augmenting human judgment with machine processing, not from replacing human judgment with machine output. The funds that have tried to fully automate fundamental investing — replacing analyst judgment with model scores — have generally underperformed both pure quant and hybrid approaches. The human edge persists in areas where historical patterns are unreliable guides: novel competitive dynamics, regulatory inflection points, management transitions, and the kinds of structural industry shifts that require qualitative reasoning rather than pattern extrapolation.
For perspective on where AI excels and where human analysts retain an edge, our detailed breakdown of whether AI will replace financial analysts explores this dynamic in depth.
Comparison: Pure Quant vs. Pure Fundamental vs. Quantamental
The following table compares the three approaches across the dimensions that matter most for investment performance, operational scalability, and competitive sustainability.
| Dimension | Pure Quant | Pure Fundamental | Quantamental |
|---|---|---|---|
| Data Inputs | Structured: prices, volumes, factors, alt data | Unstructured: filings, calls, management meetings | Both: full data spectrum with NLP bridging unstructured to structured |
| Coverage Breadth | Thousands of securities | 20–50 deep coverage names | 100–500 names at moderate depth, 20–50 at deep depth |
| Signal Type | Statistical patterns, factor exposures | Variant perception, qualitative insight | Composite: data-driven screening + qualitative conviction |
| Holding Period | Days to weeks (typically) | Months to years | Weeks to months, varies by signal |
| Scalability | High — models scale without headcount | Low — constrained by analyst capacity | Moderate — AI scales the repeatable work, humans focus on judgment |
| Regime Adaptability | Weak during novel regimes (models extrapolate from history) | Strong — human reasoning adapts to novelty | Strong — human judgment overrides models during structural breaks |
| Talent Requirements | ML engineers, data scientists, quant researchers | MBAs, CFAs, sector specialists | Both — or fundamental analysts with AI tool proficiency |
| Annual Performance (2021–2025 HFRI) | +9.2% annualized | +7.4% annualized | +10.1% annualized (multi-strategy/hybrid) |
Note: Performance figures are based on HFRI Index data through December 2025 and represent broad category averages. Individual fund performance varies significantly. Past performance is not indicative of future results. The quantamental category captures funds classified as multi-strategy or hybrid systematic/discretionary.
A Practical Framework for Building a Quantamental Process
You don't need to be Point72 to build a quantamental process. We've seen firms with as few as three investment professionals implement effective hybrid workflows by focusing on the highest-ROI integration points rather than trying to replicate the full infrastructure of a mega-fund. Here is the practical framework.
Layer 1: Quantitative Idea Generation
The first integration point is using quantitative screens to generate the idea pipeline. Instead of starting with a sector and manually reviewing every company, build factor-based screens that surface companies exhibiting interesting combinations of quantitative characteristics — accelerating revenue growth combined with declining short interest, improving return on invested capital combined with insider buying, or widening gross margins combined with below-average valuation multiples. These screens don't make the investment decision; they focus analyst attention on the names most likely to reward deep research. A well-designed screening process can reduce the time from "universe of 3,000 stocks" to "list of 30 worth investigating" from weeks to hours.
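A minimal sketch of such a screen in pandas, using a toy four-name universe — the column names, thresholds, and figures are illustrative assumptions, not real data:

```python
import pandas as pd

# Toy universe; in practice these columns come from your data vendor.
universe = pd.DataFrame({
    "ticker": ["AAA", "BBB", "CCC", "DDD"],
    "rev_growth_accel": [0.04, -0.01, 0.06, 0.02],    # QoQ change in YoY growth
    "short_interest_chg": [-0.15, 0.10, -0.05, -0.20],  # 3-month change
    "roic_chg": [0.02, 0.01, -0.03, 0.04],             # change in ROIC
    "ev_ebitda": [9.5, 22.0, 14.0, 8.0],               # valuation multiple
})

# Screen: accelerating revenue growth + declining short interest
# + improving ROIC + below-median valuation.
screen = (
    (universe["rev_growth_accel"] > 0)
    & (universe["short_interest_chg"] < 0)
    & (universe["roic_chg"] > 0)
    & (universe["ev_ebitda"] < universe["ev_ebitda"].median())
)

shortlist = universe.loc[screen, "ticker"].tolist()
print(shortlist)  # → ['AAA', 'DDD']
```

The same pattern scales from four rows to 3,000: the screen's job is only to shrink the universe, and every name that survives still gets the full fundamental workup.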
Layer 2: AI-Powered Research Automation
Once you have a short list, deploy AI to automate the most time-intensive components of fundamental research. This means NLP-driven analysis of recent earnings calls (sentiment scoring, guidance change detection, management tone comparison against prior quarters), automated review of SEC filings (flagging material changes in risk factors, accounting policies, related-party transactions, and segment disclosures), and competitive intelligence gathering (tracking every filing, transcript, and press release from the target company and its key competitors). A platform like DataToBrief handles all of this in minutes, producing structured output that an analyst can review and interrogate rather than spending days compiling manually.
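One of the automated checks described above, guidance-change detection, can be approximated crudely by measuring the density of numeric commitments in forward-looking sentences. The sketch below is a simplified illustration of the idea, not DataToBrief's method; the cue words, regexes, and sample quotes are assumptions:

```python
import re

# Words that mark a sentence as forward-looking (illustrative list).
FORWARD_CUES = ("expect", "guidance", "outlook", "anticipate", "target")

def guidance_specificity(transcript: str) -> float:
    """Average count of numeric tokens per forward-looking sentence.
    A drop quarter-over-quarter suggests vaguer guidance."""
    sentences = re.split(r"(?<=[.!?])\s+", transcript)
    forward = [s for s in sentences
               if any(cue in s.lower() for cue in FORWARD_CUES)]
    if not forward:
        return 0.0
    numbers = sum(len(re.findall(r"\d[\d.,%]*", s)) for s in forward)
    return numbers / len(forward)

q2 = "We expect revenue of $410-420 million and margins near 32%."
q3 = "We expect revenue growth to continue, though the outlook is fluid."
print(guidance_specificity(q2) > guidance_specificity(q3))  # → True
```

Even this naive proxy captures the pattern the Capital IQ statistic describes: specific ranges and percentages giving way to unquantified language is measurable, and therefore monitorable at scale.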
Layer 3: Human Judgment on Qualitative Factors
With the quantitative screening and automated research as a foundation, the human analyst focuses their irreplaceable time on the things machines still struggle with: assessing management quality through direct interaction, evaluating competitive moats through industry expertise, identifying regulatory risks through political and legal judgment, and developing variant perception — the thesis about why the market is mispricing the company that justifies taking a position. This is where the fundamental analyst's domain expertise creates genuine alpha. The quantamental framework doesn't diminish this role; it amplifies it by freeing the analyst from data-gathering drudgery and focusing their energy on the highest-value analytical tasks.
Layer 4: Quantitative Risk and Position Sizing
The final integration point is using quantitative tools for portfolio construction, risk management, and position sizing. Even if the stock selection process is fundamentally driven, the portfolio should be constructed with awareness of factor exposures (how much beta, sector concentration, and factor tilt does the portfolio carry?), correlation risk (are positions correlated in ways that create hidden concentration?), and liquidity constraints (can you exit positions within your risk tolerance window?). Quantitative tools answer these questions more rigorously and continuously than manual tracking. The result is a portfolio that reflects fundamental conviction but is constructed with institutional risk discipline.
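The hidden-concentration check in this layer reduces to a pairwise correlation scan over position returns. A sketch with pandas, on synthetic data where two positions deliberately share a common driver — the threshold and return series are illustrative assumptions:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic daily returns: two positions share a common factor,
# so they should surface as a correlated pair; the third is independent.
common = rng.normal(0, 0.02, 250)
returns = pd.DataFrame({
    "CHIP_A": common + rng.normal(0, 0.005, 250),
    "CHIP_B": common + rng.normal(0, 0.005, 250),
    "UTIL_C": rng.normal(0, 0.01, 250),
})

def correlated_pairs(returns: pd.DataFrame, threshold: float = 0.7):
    """Return position pairs whose return correlation exceeds threshold."""
    corr = returns.corr()
    cols = corr.columns
    pairs = []
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):
            if corr.iloc[i, j] > threshold:
                pairs.append((cols[i], cols[j], round(corr.iloc[i, j], 2)))
    return pairs

print(correlated_pairs(returns))  # flags the CHIP_A / CHIP_B pair
```

Running this daily against live positions is the "continuous" part of the argument: a correlation that drifts above threshold triggers review, rather than being discovered in a drawdown.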
The Quantamental Technology Stack: What You Actually Need
The biggest misconception about quantamental investing is that it requires building a massive in-house technology platform. It doesn't. The 2026 landscape offers a spectrum of build-versus-buy options that let firms of any size assemble an effective quantamental stack. Here is what the stack looks like and what each component costs.
Data feeds: Bloomberg Terminal ($24,000–$28,000 per user per year) or alternatives like Koyfin, Sentieo, or S&P Capital IQ ($5,000–$15,000 per user per year). These provide the structured financial data foundation. For alternative data, providers like YipitData, Similarweb, or Second Measure range from $50,000 to $500,000 annually depending on coverage.
AI research automation: This is the layer that transforms raw data into analytical output. DataToBrief provides NLP-driven earnings analysis, automated filing review, thesis monitoring, and institutional-grade report generation. This single platform replaces what would otherwise require 3–5 data scientists and 6–12 months of internal development. For a comprehensive comparison of tools in this space, see our guide to the best AI tools for investment research in 2026.
Quantitative screening and analytics: Python-based tools using pandas, scikit-learn, and statsmodels for custom factor screens, or off-the-shelf platforms like Portfolio123, Finbox, or Quandl/Nasdaq Data Link. A competent analyst with intermediate Python skills can build effective screening models; you don't need a PhD in machine learning.
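As a hint at what intermediate Python buys you here: a cross-sectional composite score needs nothing beyond pandas — z-score each factor across the universe, then average. The factor choices, signs, and values below are illustrative assumptions, not a recommended model:

```python
import pandas as pd

# Toy factor panel for three names (higher = better for each column).
factors = pd.DataFrame({
    "value": [0.12, 0.05, 0.09],      # e.g. earnings yield
    "momentum": [0.30, -0.10, 0.15],  # e.g. 12-1 month return
    "quality": [0.18, 0.22, 0.08],    # e.g. ROIC
}, index=["AAA", "BBB", "CCC"])

# Z-score each factor cross-sectionally, then average into a composite.
zscores = (factors - factors.mean()) / factors.std(ddof=0)
composite = zscores.mean(axis=1).sort_values(ascending=False)
print(composite.index.tolist())  # → ['AAA', 'CCC', 'BBB']
```

scikit-learn and statsmodels come in when you move from ranking to modeling (regularized cross-sectional regressions, factor neutralization), but the screening backbone really is this simple.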
Portfolio analytics and risk: Platforms like Aladdin (for larger firms), Axioma, or open-source alternatives like Riskfolio-Lib (Python) for factor decomposition, correlation analysis, and stress testing.
The total cost for a small fund (3–5 investment professionals) to assemble a competitive quantamental stack in 2026 is $200,000–$600,000 annually. That is a fraction of what Point72 or Citadel spends, but it delivers 70–80% of the quantamental capability for firms that don't need to process millions of alternative data points daily. The key is focusing investment on the highest-ROI components: NLP-driven research automation and quantitative screening deliver the most alpha-per-dollar of any technology investment in the stack.
Frequently Asked Questions
What is quantamental investing and how does it differ from pure quant or fundamental approaches?
Quantamental investing combines quantitative models — statistical analysis, machine learning, systematic signal generation — with traditional fundamental analysis including financial statement analysis, management assessment, and industry research. Unlike pure quant strategies that rely entirely on data-driven signals without human qualitative judgment, or pure fundamental strategies that depend on individual analyst expertise, quantamental approaches integrate both. Algorithms process vast datasets while human analysts apply qualitative judgment for factors like management quality, competitive dynamics, and regulatory risk. The critical difference is workflow integration: quantamental firms embed data science into the fundamental research process rather than running separate quant and fundamental teams. In practice, this means an analyst receives NLP-generated insights on an earnings call alongside their own notes, or a portfolio manager sees a composite score that blends factor signals with their team's qualitative conviction.
Which firms are leading the quantamental investing movement in 2026?
The leading quantamental firms span both the quant-to-fundamental and fundamental-to-quant trajectories. Point72 Asset Management, Steve Cohen's $35 billion fund, built Cubist Systematic Strategies alongside its fundamental PMs and now embeds data scientists in every sector team. Citadel integrates AI signals directly into discretionary PM workflows across equities, fixed income, and commodities. Two Sigma has expanded from pure quant into fundamental-augmented systematic strategies. D.E. Shaw blends systematic and discretionary through explicit cross-pollination mechanisms. On the traditional side, Viking Global, Lone Pine Capital, and Tiger Global have hired data science teams to build quantitative overlays. BlackRock merged its systematic and fundamental active equity platforms in 2023. Balyasny Asset Management built a fully integrated quantamental research platform used by every team. The convergence is industry-wide, not confined to a handful of early adopters.
How does NLP on earnings calls feed quantamental investment models?
NLP on earnings calls generates multiple signal types that feed quantamental models. Sentiment scoring quantifies management tone, confidence levels, and hedging language across consecutive calls, producing time-series signals that can be combined with financial data. Topic modeling identifies shifts in what management emphasizes or avoids. Named entity recognition extracts references to customers, competitors, products, and markets. Cross-referencing algorithms compare qualitative statements against quantitative filings to detect inconsistencies. These NLP-derived signals are combined with traditional fundamental metrics and quantitative factors in ensemble models that generate composite investment scores. Research from the NBER shows that NLP-detected changes in management linguistic patterns predict subsequent earnings surprises with 58–63% directional accuracy — significantly better than human analysts reading the same transcripts (34% detection rate). The key insight is that NLP signals are most valuable as enhancements to fundamental research, not as standalone trading signals.
Is pure discretionary investing dead?
Pure discretionary investing is not dead, but it is structurally disadvantaged. HFRI data shows that pure discretionary long/short equity funds underperformed hybrid quantamental approaches by an average of 2.7 percentage points annually from 2021 through 2025, with the gap widening during earnings seasons when information processing speed matters most. Allocator preferences have shifted accordingly: 78% of institutional investors now prefer managers who integrate quantitative tools into fundamental research, per Preqin's 2025 survey. However, discretionary judgment remains essential for novel events, complex corporate actions, regulatory inflection points, and management quality assessment — areas where historical pattern recognition has limited value. The accurate framing is that pure discretionary investing is evolving into AI-augmented discretionary investing, where human judgment remains central but is informed and accelerated by quantitative and NLP tools.
How can a small fund build a quantamental process without a large technology budget?
A small fund can build an effective quantamental process for $200,000–$600,000 annually — a fraction of the multi-million-dollar budgets at mega-funds. The approach has three layers. First, deploy a specialized AI research platform like DataToBrief that provides NLP-driven earnings analysis, automated SEC filing review, and thesis monitoring out of the box. This delivers 80% of the quantamental data processing capability without internal data science teams. Second, use open-source quantitative tools (Python with pandas and scikit-learn) to build factor screens and scoring models — effective quantitative screening does not require deep learning. Third, establish a structured workflow where quantitative screens generate the idea pipeline, AI-powered research automates initial analysis, and human analysts focus on qualitative assessment, variant perception, and portfolio construction. Smaller funds also have structural advantages that AI amplifies: they can exploit capacity-constrained alpha in small and mid-cap names and can focus AI resources on a specific sector or analytical edge rather than building broad infrastructure.
Build Your Quantamental Edge — Without a 200-Person Data Science Team
The quantamental revolution is not about replacing fundamental analysts with algorithms. It is about giving fundamental analysts superpowers. DataToBrief delivers the NLP-driven earnings analysis, automated filing review, and thesis monitoring that firms like Point72 and Citadel build internally — as a platform any investment professional can deploy in minutes.
Process earnings transcripts through institutional-grade NLP. Automatically detect management tone shifts, guidance changes, and filing discrepancies. Monitor your investment theses against every new data point. Focus your irreplaceable human judgment on the qualitative factors that actually generate alpha.
See how DataToBrief fits into a quantamental workflow with our interactive product tour, or request early access to start building your quantamental process today.
Disclaimer: This article is for informational purposes only and does not constitute investment advice, an endorsement of any specific fund or strategy, or a recommendation to purchase or subscribe to any service. Performance data cited is based on publicly available hedge fund indices that carry survivorship bias and other methodological limitations. Past performance is not indicative of future results. References to specific firms (Point72, Citadel, Two Sigma, D.E. Shaw, Man Group, BlackRock, Balyasny, Viking Global, Lone Pine, Tiger Global) are based on publicly available information and do not imply endorsement by or affiliation with these firms. All investment decisions should be made by qualified professionals exercising independent judgment. DataToBrief is a product of the company that publishes this website.