{"id":12247,"library":"typescript-parsec","title":"TypeScript Parser Combinator","description":"typescript-parsec is a parser combinator library designed for TypeScript, allowing developers to construct parsers quickly with a concise API. It provides utilities for lexing (tokenizing input) and parsing (defining grammar rules) using a functional, combinatory approach. As of version 0.3.4 it is still in a pre-1.0 development phase, so the API may evolve between releases, although the project is maintained under the Microsoft GitHub organization. Releases follow no fixed cadence; updates land periodically as needs arise and contributions are accepted. The library differentiates itself by being explicitly typed for TypeScript, offering robust type inference and safety for parser definitions, which makes it well suited to building language front-ends directly within a TypeScript codebase, with strong type guarantees and a clear, efficient way to define complex grammars.","status":"maintenance","version":"0.3.4","language":"javascript","source_language":"en","source_url":"https://github.com/microsoft/ts-parsec","tags":["javascript","typescript"],"install":[{"cmd":"npm install typescript-parsec","lang":"bash","label":"npm"},{"cmd":"yarn add typescript-parsec","lang":"bash","label":"yarn"},{"cmd":"pnpm add typescript-parsec","lang":"bash","label":"pnpm"}],"dependencies":[],"imports":[{"note":"CommonJS `require` works, but ES module imports are the idiomatic choice for this TypeScript-first library.","wrong":"const { buildLexer } = require('typescript-parsec');","symbol":"buildLexer","correct":"import { buildLexer } from 'typescript-parsec';"},{"note":"`rule` is a named export, not a default export.","wrong":"import rule from 'typescript-parsec';","symbol":"rule","correct":"import { rule } from 'typescript-parsec';"},{"note":"Most core parser combinators are directly exported from the main package path; avoid deep imports into `dist`.","wrong":"import { alt, seq } from 'typescript-parsec/dist/parser';","symbol":"alt, seq, apply","correct":"import { alt, seq, apply } from 'typescript-parsec';"}],"quickstart":{"code":"import { buildLexer, expectEOF, expectSingleResult, rule, Token } from 'typescript-parsec';\nimport { alt, apply, kmid, lrec_sc, seq, str, tok } from 'typescript-parsec';\n\nenum TokenKind {\n    Number,\n    Add,\n    Sub,\n    Mul,\n    Div,\n    LParen,\n    RParen,\n    Space\n}\n\nconst lexer = buildLexer([\n    [true, /^\\d+(\\.\\d+)?/g, TokenKind.Number],\n    [true, /^\\+/g, TokenKind.Add],\n    [true, /^-/g, TokenKind.Sub],\n    [true, /^\\*/g, TokenKind.Mul],\n    [true, /^\\//g, TokenKind.Div],\n    [true, /^\\(/g, TokenKind.LParen],\n    [true, /^\\)/g, TokenKind.RParen],\n    [false, /^\\s+/g, TokenKind.Space]\n]);\n\nfunction applyNumber(value: Token<TokenKind.Number>): number {\n    return +value.text;\n}\n\nfunction applyUnary(value: [Token<TokenKind>, number]): number {\n    switch (value[0].text) {\n        case '+': return +value[1];\n        case '-': return -value[1];\n        default: throw new Error(`Unknown unary operator: ${value[0].text}`);\n    }\n}\n\nfunction applyBinary(first: number, second: [Token<TokenKind>, number]): number {\n    switch (second[0].text) {\n        case '+': return first + second[1];\n        case '-': return first - second[1];\n        case '*': return first * second[1];\n        case '/': return first / second[1];\n        default: throw new Error(`Unknown binary operator: ${second[0].text}`);\n    }\n}\n\nconst TERM = rule<TokenKind, number>();\nconst FACTOR = rule<TokenKind, number>();\nconst EXP = rule<TokenKind, number>();\n\nTERM.setPattern(\n    alt(\n        apply(tok(TokenKind.Number), applyNumber),\n        apply(seq(alt(str('+'), str('-')), TERM), applyUnary),\n        kmid(str('('), EXP, str(')'))\n    )\n);\n\nFACTOR.setPattern(\n    lrec_sc(TERM, seq(alt(str('*'), str('/')), TERM), applyBinary)\n);\n\nEXP.setPattern(\n    lrec_sc(FACTOR, seq(alt(str('+'), str('-')), FACTOR), applyBinary)\n);\n\nfunction evaluate(expr: string): number {\n    return expectSingleResult(expectEOF(EXP.parse(lexer.parse(expr))));\n}\n\nconsole.log(`(1 + 2) * (3 + 4) = ${evaluate('(1 + 2) * (3 + 4)')}`);","lang":"typescript","description":"Demonstrates how to define a simple arithmetic expression parser using `typescript-parsec`: a lexer definition, parser rules for terms, factors, and expressions, and the evaluation of an example string."},"warnings":[{"fix":"Pin `typescript-parsec` to a specific version (e.g., `\"typescript-parsec\": \"0.3.4\"`) and review GitHub release notes/changelogs before upgrading.","message":"As a pre-1.0 library (current version 0.3.4), the API is considered unstable and may introduce breaking changes in minor or even patch versions. Pin exact versions and review release notes carefully before updating.","severity":"breaking","affected_versions":"0.x.x"},{"fix":"For performance-critical applications, consider optimizing grammar rules, introducing manual memoization, or breaking large parsing tasks into smaller, manageable steps.","message":"For very large input strings or highly complex/ambiguous grammars, unoptimized parser combinators can hit performance bottlenecks or stack overflow errors due to excessive recursion and backtracking. The library might not include automatic memoization of intermediate results.","severity":"gotcha","affected_versions":">=0.3.0"},{"fix":"Implement custom error recovery logic or augment default error messages with more contextual information by wrapping parser results or extending core functionality.","message":"The default error messages generated by parser combinators can be generic, making it challenging to debug complex parsing failures in production. Pinpointing the exact location and nature of a syntax error might require custom error handling.","severity":"gotcha","affected_versions":">=0.3.0"}],"env_vars":null,"last_verified":"2026-04-19T00:00:00.000Z","next_check":"2026-07-18T00:00:00.000Z","problems":[{"fix":"Ensure `buildLexer` is correctly initialized with token definitions and that `rule.setPattern()` has been called on every rule before invoking `.parse()` on it.","cause":"Calling `.parse()` on a lexer or parser object that has not been properly initialized or assigned.","error":"TypeError: Cannot read properties of undefined (reading 'parse')"},{"fix":"Review your grammar rules to ensure they can consume the entire expected input, and apply `expectEOF()` to the top-level parser's result to enforce full input consumption.","cause":"The parser successfully consumed a prefix of the input, but unexpected tokens remained at the end, meaning the grammar did not consume the entire input string.","error":"SyntaxError: Expected EOF"}],"ecosystem":"npm"}