1 Introduction

The TypeScript [13] programming language has become a widely used alternative to JavaScript for developing web applications. TypeScript is a superset of JavaScript that adds language features important for developing and maintaining larger applications. Most notably, TypeScript provides optional types, which not only allow many type errors to be detected statically, but also enable powerful IDE support for code navigation, auto-completion, and refactoring. To allow TypeScript applications to use existing JavaScript libraries, the typed APIs of such libraries can be described in separate declaration files. A public repository exists containing declaration files for more than 2000 libraries, and they are a critical component of the TypeScript software ecosystem.

Unfortunately, the declaration files are written and maintained manually, which is tedious and error-prone. Mismatches between declaration files and the corresponding JavaScript implementations of libraries affect TypeScript application programmers: the type checker produces incorrect type error messages, and code navigation and auto-completion are misguided, which may cause programming errors and increase development costs. The tool tscheck [8] has been designed to detect such mismatches, but three central challenges remain. First, the process of constructing the initial version of a declaration file is still manual. Although TypeScript has become popular, many new libraries are still being written in JavaScript, so the need for constructing new declaration files is not diminishing. We need tool support not only for checking the correctness of declaration files, but also for assisting the programmers creating them from the JavaScript implementations. Second, JavaScript libraries evolve, like other software, and when their APIs change, the declaration files must be updated. We observe that the evolution of many declaration files lags considerably behind the libraries, which causes the same problems with unreliable type checking and IDE support as erroneous declaration files, and it may make application programmers reluctant or unable to use the newest versions of the libraries. With the increasing adoption of TypeScript and the profusion of libraries, this problem will likely grow in the future. For these reasons, we need tools to support the programmers in this co-evolution of libraries and declaration files. Third, tscheck is not sufficiently scalable to handle modern JavaScript libraries, which are often significantly larger than those of a couple of years ago.

The contributions of this paper are as follows.

  • To further motivate our work, we demonstrate why the state-of-the-art tool tscheck is inadequate for inference and evolution of declaration files, and we describe a small study that uncovers to what extent the evolution of TypeScript declaration files typically lags behind the evolution of the underlying JavaScript libraries (Sect. 2).

  • We present the tool tsinfer, which is based on tscheck but specifically designed to address the challenge of supporting programmers when writing new TypeScript declaration files for JavaScript libraries, and to scale to even the largest libraries (Sect. 3).

  • Next, we present the tool tsevolve, which builds on top of tsinfer to support the task of co-evolving TypeScript declaration files as the underlying JavaScript libraries evolve (Sect. 4).

  • We report on an experimental evaluation, which shows that tsinfer is better suited than tscheck for assisting the developer in creating the initial versions of declaration files, and that tsevolve is superior to both tscheck and tsinfer for supporting the co-evolution of declaration files (Sect. 5).

2 Motivating Examples

The PixiJS Library. PixiJS is a powerful JavaScript library for 2D rendering that has been under development since 2013. A TypeScript declaration file was written manually for version 2.2 (after some incomplete attempts), and the authors have since then made numerous changes to try to keep up with the rapid evolution of the library. At the time of writing, the current version of PixiJS is 4.0, and the co-evolution of the declaration file continues to require substantial manual effort, as evidenced by the numerous commits and issues in the repository. Hundreds of library developers face similar challenges with building TypeScript declaration files and updating them as the libraries evolve.

From Checking to Inferring Declaration Files. To our knowledge, only one tool exists that may alleviate the manual effort required: tscheck  [8]. This tool detects mismatches between a JavaScript library and a TypeScript declaration file. It works in three phases: (1) it executes the library’s initialization code and takes a snapshot of the resulting runtime state; (2) it then type checks the objects in the snapshot, which represent the structure of the library API, with respect to the TypeScript type declarations; (3) it finally performs a light-weight static analysis of each library function to type check the return value of each function signature. This works well for detecting errors, but not for inferring and evolving the declaration files. For example, running tscheck on PixiJS version 2.2 and a declaration file with an empty PIXI module (mimicking the situation where the module is known to exist but its API has not yet been declared) reports nothing but the missing properties of the PIXI module, which is practically useless. In comparison, our new tool tsinfer is able to infer a declaration file that is quite close to the manually written one. Figure 1 shows the automatically inferred declaration for one of the classes in PixiJS version 2.2. The declaration is not perfect (the types of frameId, crossorigin, scaleMode, and shader could be more precise), but evidently such output is a better starting point when creating the initial version of a declaration file than starting completely from scratch.

Fig. 1. Example output from tsinfer, when run on PixiJS version 2.2.
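(The figure itself is not reproduced here. To convey the flavor of such output, the following is an illustrative sketch only: the class name and most members are invented, and just the four imprecisely typed fields, frameId, crossorigin, scaleMode, and shader, are taken from the text below.)

```typescript
// Hypothetical sketch of tsinfer output for a PixiJS 2.2 class; only the four
// fields flagged as imprecise come from the paper's text, the rest is invented.
declare namespace PIXI {
  class BaseTexture {
    constructor(source: any, scaleMode: any);
    frameId: any;      // could be more precise, e.g. number
    crossorigin: any;  // could be more precise, e.g. boolean
    scaleMode: any;    // could be more precise, e.g. number
    shader: any;       // could be more precise
    width: number;
    height: number;
    destroy(): void;
  }
}
```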

Evolving Declaration Files. The PixiJS library has recently been updated from version 3 to version 4. Using tscheck to help update the declaration file would not be very effective. For example, running tscheck on version 4 of the JavaScript file and the existing version 3 of the declaration file reports that 38 properties are missing on the PIXI object, without any information about their types. Moreover, 15 of these properties are also reported when running tscheck on version 3 of the JavaScript file, since they are due to the developers intentionally leaving some properties undocumented. Our experiments presented in Sect. 5 show that many libraries have such intentionally undocumented features, and some also have properties that intentionally exist in the declaration file but not in the library. While tsinfer does suggest a type for each of the new properties, it has no way to handle the intentional discrepancies. Our other tool tsevolve attempts to solve that problem by looking only at differences between two versions of the JavaScript implementation, and it is thereby better at reporting only actual changes. When running tsevolve on PixiJS versions 3 and 4, it reports (see Fig. 2(a)) that 8 properties have been removed and 24 properties have been added on the PIXI object. All of these correctly reflect an actual change in the library implementation, and the declaration file should therefore be updated accordingly. This update inevitably requires manual intervention, though; in this specific case, PrimitiveShader has been removed from the PIXI object but the developers want to keep it in the declarations as an internal class, and TransformManual, although it is new to version 4, is a deprecated alias for the also-added TransformBase.

Fig. 2. Example output from tsevolve, when run on PixiJS versions 3 and 4.

Changes in a library API from one version to the next often consist of extensions, but features are also sometimes removed, or types are changed. As an example of the latter, one of the changes from version 3 to 4 of PixiJS was changing the type of the field stencilMaskStack in the class RenderTarget from PIXI.StencilMaskStack to PIXI.Graphics[]. The developer updating the declaration file noticed that the field was now an array, but not that the elements were changed to type PIXI.Graphics, so the type was erroneously updated to PIXI.StencilMaskStack[]. In comparison, tsevolve reports the change correctly, as shown in Fig. 2(b).
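For concreteness, the following sketch shows the two declaration excerpts; only the type of stencilMaskStack is taken from the text above, and the surrounding class bodies are elided:

```typescript
// pixi-v3.d.ts (excerpt; other members and referenced types elided):
declare namespace PIXI {
  class RenderTarget {
    stencilMaskStack: StencilMaskStack;
  }
}

// pixi-v4.d.ts (excerpt): the erroneous manual update wrote
// StencilMaskStack[]; the correct type, as reported by tsevolve, is:
declare namespace PIXI {
  class RenderTarget {
    stencilMaskStack: Graphics[];
  }
}
```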

A Study of Evolution of Type Declarations. To further motivate the need for new tools to support the co-evolution of declaration files as the libraries evolve, we have measured to what extent existing declaration files lag behind the libraries. We collected every JavaScript library that satisfies the following conditions: it is being actively developed and has a declaration file in the DefinitelyTyped repository, the declaration file contains a recognizable version number, and the library uses git tags for marking new versions; we study the commits from January 2014 to August 2016. This resulted in 49 libraries. By comparing the timestamps of the version changes for each library and its declaration file (ignoring patch releases and considering only major.minor versioning), we find that for more than half of the libraries, the declaration file lags behind by at least a couple of months, and for some by more than a year. This is notable given that all the libraries are widely used according to their GitHub ratings, and it seriously affects the usefulness of the declaration files in TypeScript application development.

Interestingly, we also find many cases where the version number found in the declaration file has not been updated correctly along with the contents of the file. Not being able to trust version numbers of course also affects the usability of the declaration files. For some high-profile libraries, such as jQuery and AngularJS, the declaration files are kept up-to-date, which demonstrates that the developers find it necessary to invest the required effort despite the lack of tool support. We hope our new tools can help not only those developers but also those who do not have the same level of manual resources available.

Scalability. In addition to the limitations of tscheck described above, we find that its static analysis component, which we also use as a foundation for tsinfer and tsevolve, is not sufficiently scalable to handle the size and complexity of contemporary JavaScript libraries. In Sect. 3.2 we explain how we replace the unification-based analysis technique used by tscheck with a more precise subset-based one, and in Sect. 5 we demonstrate that this modification, perhaps counterintuitively, leads to a significant improvement in scalability. As an example, the time required to analyze Moment.js improves from 873 s to 12 s, while other libraries simply are not analyzable in reasonable time with the unification-based approach.

3 tsinfer: Inference of Initial Type Declarations

Our inference tool tsinfer works in three phases: (1) it concretely initializes the library in a browser and records a snapshot of the resulting runtime state, much like the first phase of tscheck (see Sect. 2); (2) it performs a static analysis of all the functions in that snapshot, similarly to the third phase of tscheck; (3) it emits a TypeScript declaration file. As two of the phases are quite similar to the approach used by tscheck, we focus here on what tsinfer does differently.

3.1 The Snapshot Phase

In JavaScript, library code must actively put entry points into the heap for them to be callable by application code. This initialization, however, often involves complex metaprogramming, and statically analyzing the initialization of a library like jQuery can therefore be extremely complicated [2]. We sidestep this challenge by concretely initializing the library in a real browser and recording a snapshot of the heap after the top-level code has finished executing. This is done in the same way as for tscheck, and we work under the same assumptions, notably that the library API has been established once the top-level code has executed. We have, however, changed a few things.
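A minimal sketch of such snapshot recording, assuming it runs in the page right after the library's top-level code has finished (this illustrates the idea and is not the tool's actual implementation):

```typescript
// Sketch: record a snapshot of the heap reachable from the library's global
// entry point (e.g. window.PIXI). Object identities become numeric ids so the
// snapshot can be serialized and analyzed outside the browser.
type Value = { kind: "prim"; typeOf: string } | { kind: "obj"; id: number };

interface ObjNode {
  id: number;
  isFunction: boolean;
  prototypeId: number | null;
  properties: Map<string, Value>;
}

function takeSnapshot(root: object): ObjNode[] {
  const ids = new Map<object, number>();
  const nodes: ObjNode[] = [];
  const worklist: object[] = [];

  function idOf(obj: object): number {
    let id = ids.get(obj);
    if (id === undefined) {
      id = nodes.length;
      ids.set(obj, id);
      nodes.push({ id, isFunction: typeof obj === "function",
                   prototypeId: null, properties: new Map() });
      worklist.push(obj);
    }
    return id;
  }

  idOf(root);
  while (worklist.length > 0) {
    const obj = worklist.pop()!;
    const node = nodes[ids.get(obj)!];
    const proto = Object.getPrototypeOf(obj);
    if (proto !== null) node.prototypeId = idOf(proto);
    for (const name of Object.getOwnPropertyNames(obj)) {
      const desc = Object.getOwnPropertyDescriptor(obj, name)!;
      if (!("value" in desc)) continue; // accessor properties skipped in this sketch
      const v = desc.value;
      if (v !== null && (typeof v === "object" || typeof v === "function")) {
        node.properties.set(name, { kind: "obj", id: idOf(v) });
      } else {
        node.properties.set(name, { kind: "prim", typeOf: typeof v });
      }
    }
  }
  return nodes;
}
```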

For all functions in the returned snapshot, we record two extra pieces of information compared to tscheck: (1) the result of calling the function with the new operator (if the call returned normally), which helps us determine the structure of a class if the function is found to be a constructor; (2) all calls to the function that occur during the initialization, which we use to seed the static analysis phase.
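A hedged sketch of how both pieces of information could be recorded; the wrapper approach and all names here are our illustration, not the tool's actual mechanism:

```typescript
// (1) Try calling a snapshot function with `new` to expose the structure of
// instances, in case it turns out to be a constructor.
function tryNew(f: Function): object | undefined {
  try {
    return new (f as new () => object)();
  } catch {
    return undefined; // the call did not return normally; record nothing
  }
}

// (2) Wrap functions before running the top-level code so that every call
// observed during initialization is recorded; these concrete arguments and
// return values later seed the static analysis.
interface CallRecord { args: unknown[]; returned: unknown; }
const observedCalls = new Map<Function, CallRecord[]>();

function instrument<F extends (...args: any[]) => any>(f: F): F {
  const wrapped = function (this: unknown, ...args: any[]) {
    const returned = f.apply(this, args);
    let records = observedCalls.get(f);
    if (!records) observedCalls.set(f, (records = []));
    records.push({ args, returned });
    return returned;
  };
  return wrapped as unknown as F;
}
```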

The last step is to create a class hierarchy. JavaScript libraries use many different and complicated ways of creating their internal class structures, but after the initialization is done, the vast majority of libraries end up with constructor functions and prototype chains. The class hierarchy is therefore created by making a straightforward inspection of the prototype chains.
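A sketch of this inspection, assuming the common pattern where a subclass's prototype object has the superclass's prototype on its own prototype chain:

```typescript
// A function is treated as a class if it has instances or prototype methods;
// its superclass is the constructor of the next prototype on the chain.
function superclassOf(ctor: Function): Function | null {
  const parentProto = Object.getPrototypeOf(ctor.prototype);
  if (parentProto === null || parentProto === Object.prototype) {
    return null; // top of the hierarchy
  }
  const superCtor = parentProto.constructor;
  return typeof superCtor === "function" ? superCtor : null;
}

// E.g. (hypothetical): superclassOf(PIXI.Sprite) === PIXI.Container
```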

3.2 The Static Analysis Phase

The static analysis phase takes the produced snapshot as input and performs a static analysis of each of the functions. It produces types for the parameters and the return value of each function.

The analysis is unsound, flow-insensitive, and context-insensitive, and it has all the features described in previous work [8], including the treatment of properties and native functions. There are, however, some important changes.

tscheck analyzes each function separately, meaning that if a function f calls a function g, this information is ignored when analyzing g. This suffices for a tool like tscheck that only infers the return types of functions, but when also inferring parameter types, the information gained by observing calls to a function is essential. Our analysis therefore does not analyze each function separately, but instead performs a single analysis that covers all the functions.

While tscheck opts for a unification-based analysis, we find that switching to a subset-based analysis is necessary to gain the scalability needed to infer types for the larger JavaScript libraries, as discussed in Sect. 2. The subset-based analysis is similar to the one described by Pottier [15], as it keeps separate constraint variables for upper bounds and lower bounds. After the analysis, the types for the upper-bound and lower-bound constraint variables are merged to form a single resulting type for each expression.
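The following minimal sketch conveys the flavor of such a subset-based analysis; for brevity it keeps a single set of type tokens per constraint variable instead of the separate upper-bound and lower-bound variables described above, and it is of course not tsinfer's actual constraint system:

```typescript
// Sketch of subset-based constraint solving: each expression gets a constraint
// variable holding a set of type tokens; tokens flow along subset edges until
// a fixpoint is reached.
type TypeToken = "number" | "string" | "boolean" | "object" | "function";

class ConstraintVar {
  types = new Set<TypeToken>();
  successors = new Set<ConstraintVar>(); // subset edges: this's types flow to succ
}

function solve(allVars: ConstraintVar[]): void {
  const worklist = [...allVars];
  while (worklist.length > 0) {
    const v = worklist.pop()!;
    for (const succ of v.successors) {
      let changed = false;
      for (const t of v.types) {
        if (!succ.types.has(t)) {
          succ.types.add(t);
          changed = true;
        }
      }
      if (changed) worklist.push(succ); // propagate further from succ
    }
  }
}
```

An assignment x = y then simply adds a subset edge from y's variable to x's; a unification-based analysis would instead merge the two variables permanently.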

Compared to tscheck, some constraints have been added to improve precision for parameter types, for example, so that the arguments to operators such as - and * are treated as numbers. (Due to the page limit, we omit the actual analysis constraints used by tsinfer.)

A subset-based analysis gives more precise dataflow information than a unification-based analysis; however, more precise dataflow information does not necessarily result in more precise type inference. For example, consider the expression foo = bar || "", where bar is a parameter to a function that is never called within the library. A unification-based analysis, such as tscheck's, will unify the types of foo, bar, and "", and thereby conclude that the type of bar is possibly a string. A more precise subset-based analysis will only constrain the possible types of foo to be a superset of the types of bar and "", and thereby conclude that the type of bar is unconstrained. In a subset-based analysis with both upper-bound and lower-bound constraint variables, the example becomes more complicated, but the result remains the same. This shows that changing from a unification-based to a subset-based analysis does not necessarily improve the precision of the type inference. We investigate this experimentally in Sect. 5.
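In code, with comments stating what each analysis concludes for the example:

```typescript
// bar is a parameter of a function that is never called within the library.
function f(bar: any) {
  const foo = bar || "";
  // Unification-based: var(foo), var(bar), and var("") are merged,
  //   so bar is concluded to possibly be a string.
  // Subset-based: var(bar) ⊆ var(foo) and var("") ⊆ var(foo),
  //   so foo may be a string, but nothing flows *into* var(bar):
  //   bar remains unconstrained.
}
```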

3.3 The Emitting Phase

The last phase of tsinfer uses the results of the preceding phases to emit a declaration for the library. A declaration can be seen as a tree structure that resembles the heap snapshot, so we create the declaration by traversing the heap snapshot and converting the JavaScript values to TypeScript types, using the results from the static analysis when a function is encountered.

Implementing this phase is conceptually straightforward, although it does involve some technical complications, for example, handling cycles in the heap snapshot and combining a set of recursive types into a single type.
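A sketch of the cycle handling, reusing the ObjNode type from the snapshot sketch in Sect. 3.1 (the naming scheme and structure are our own illustration):

```typescript
// Convert snapshot nodes to named interfaces; a node that is visited again
// (a cycle, or a shared object) is referenced by name instead of re-expanded.
const emittedName = new Map<number, string>(); // node id -> type name

function typeOf(node: ObjNode, nodes: ObjNode[], out: string[]): string {
  const existing = emittedName.get(node.id);
  if (existing !== undefined) return existing; // break cycles here
  const name = `T${node.id}`;
  emittedName.set(node.id, name);
  const members: string[] = [];
  for (const [prop, v] of node.properties) {
    const t = v.kind === "prim"
      ? (v.typeOf === "function" ? "Function" : v.typeOf) // typeof string as type
      : typeOf(nodes[v.id], nodes, out);
    members.push(`${prop}: ${t};`);
  }
  out.push(`interface ${name} { ${members.join(" ")} }`);
  return name;
}
```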

4 tsevolve: Evolution of Type Declarations

The goal of tsevolve is to create a list of changes between an old and a new version of a JavaScript library. To do this it has access to three input files: the JavaScript files for the old version (old.js) and the new version (new.js), and an existing TypeScript declaration file for the old version (old.d.ts).

To find the needed changes for the declaration file, a naive first approach would be to compare old.d.ts with the output of running tsinfer on new.js. However, this results in many spurious warnings, both due to imprecision in the analysis of new.js and because of intentional discrepancies in old.d.ts, as discussed in Sect. 2.

Instead we choose a less obvious approach, where tsevolve uses tsinfer to generate declarations for both old.js and new.js. These declarations are then traversed as trees, and any location where the two disagree is marked as a change. The output of this process still contains spurious changes, but unchanged features in the implementation rarely appear among them, as imprecision in unchanged features is likely the same in both versions. We then use old.d.ts to filter out the changes that concern features not declared in old.d.ts, which removes many of the remaining spurious changes. The relevant function source code from old.js and new.js is also printed as part of the output, which allows for easy manual identification of many of the remaining spurious changes. As the analysis does not have perfect precision, it is necessary to manually inspect and potentially adjust the suggested changes before modifying the declaration file.
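A sketch of this comparison (our formulation of the idea; the real tool compares full declarations, including function signatures):

```typescript
// Diff two inferred declaration trees; changes are later kept only if the
// corresponding path is declared in old.d.ts.
interface DeclNode {
  type: string;                    // printed type of this feature
  children: Map<string, DeclNode>; // members, keyed by name
}

function* diff(oldDecl: DeclNode, newDecl: DeclNode,
               path: string[] = []): Generator<string> {
  if (oldDecl.type !== newDecl.type) {
    yield `changed ${path.join(".")}: ${oldDecl.type} -> ${newDecl.type}`;
  }
  for (const [name, oldChild] of oldDecl.children) {
    const newChild = newDecl.children.get(name);
    if (newChild === undefined) yield `removed ${[...path, name].join(".")}`;
    else yield* diff(oldChild, newChild, [...path, name]);
  }
  for (const name of newDecl.children.keys()) {
    if (!oldDecl.children.has(name)) yield `added ${[...path, name].join(".")}`;
  }
}
```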

As an extra feature, in case a partially updated declaration file for the new version is available, tsevolve can use that file to filter out some of the changes that have already been made.

5 Experimental Evaluation

Our implementations of tsinfer and tsevolve, which together contain around 20000 lines of Java code and 1000 lines of JavaScript code, are available at http://www.brics.dk/tstools/.

We evaluate the tools using the following research questions.

  • RQ1: Does the subset-based approach used by tsinfer improve analysis speed and precision compared to the unification-based alternative?

  • RQ2: A tool such as tscheck that only aims to check existing declarations may blindly assume that some parts of the declarations are correct, whereas a tool such as tsinfer must aim to infer complete declarations. For this reason, it is relevant to ask: How much information in declarations is blindly ignored by tscheck but potentially inferred by tsinfer?

  • RQ3: Can tsinfer infer useful declarations for libraries? That is, how accurate is the structure of the declarations and the quality of the types compared to handwritten declarations?

  • RQ4: Is tsevolve useful in the process of co-evolving declaration files as the underlying libraries evolve? In particular, does the tool make it possible to correctly update a declaration file in a short amount of time?

We answer these questions by running the tools on randomly selected JavaScript libraries, all of which have more than 5000 stars on GitHub and a TypeScript declaration file of at least 100 LOC. Our tools do not yet support the require function from Node.js, so we exclude Node.js libraries from this evaluation. All experiments have been executed on a Windows 10 laptop with 16 GB of RAM and an Intel i7-4712MQ processor running at 1.5 GHz.

RQ1 (Subset-Based vs. Unification-Based Static Analysis)

To compare the subset-based and unification-based approaches, we ran tsinfer on 20 libraries. The results can be found in the left half of Table 1. The Funcs column shows the number of functions analyzed for each library. The Unification and Subset columns show the analysis time for the unification-based and subset-based analysis, respectively, using a timeout of 30 min.

Table 1. Analysis speed and precision.

The results show that our subset-based analysis is significantly faster than the unification-based approach. This is perhaps counterintuitive for readers familiar with Andersen-style [1] (subset-based) and Steensgaard-style [20] (unification-based) pointer analysis for, e.g., C or Java. However, it has been observed before for JavaScript, where the call graph is usually inferred as part of the analysis, that increased precision often boosts performance [2, 19].

We compared the precision of the two approaches by their ability to infer function signatures on the libraries where the unification-based approach does not reach a timeout. Determining which of two machine-generated function signatures is more precise is difficult to do objectively, so we randomly sampled some of the function signatures and manually determined their precision. To minimize bias, each pair of generated function signatures was presented in random order.

The results from these tests are shown in the right half of Table 1, where the function signatures have been grouped into four categories: Unification (the unification-based analysis inferred the most precise signature), Subset (the subset-based analysis was the most precise), Equal (the two approaches were equally precise), and Unclear (no clear winner). The results show that the subset-based approach in general infers better types than the unification-based approach. The unification-based analysis did in some cases infer the better type, which is possible because a more precise analysis does not necessarily result in more precise type inference, as explained in Sect. 3.2.

RQ2 (Information Ignored by tscheck but Considered by tsinfer)

tscheck only checks the return types of functions whose signatures in the declaration file do not have a void/any return type, which may detect many errors, but the rest of the declaration file is blindly assumed to be correct. In contrast, tsinfer infers types for all functions, including their parameters, and it also infers classes and fields.

Table 2 gives an indication of the amount of extra information that tsinfer can reason about compared to tscheck. For each library, we show the number of functions that have return type void or any (and in parentheses the total number of functions), and the number of parameters, classes, and fields, respectively. The numbers are based on the existing handwritten declaration files.

We see that on the 20 benchmarks, tscheck ignores 1714 of the 4224 functions, silently assumes 5628 parameter types to be correct, and ignores 1436 instance fields spread over 416 classes. In contrast, tsinfer, and thereby also tsevolve, considers all these kinds of information.

Table 2. Features in handwritten declaration files ignored by tscheck but taken into account by tsinfer.

RQ3 (Usefulness of tsinfer)

As mentioned in Sect. 2, tscheck is effective for checking declarations, but not for inferring them. We are not aware of any other existing tool that could be considered as an alternative to tsinfer. To evaluate the usefulness of tsinfer, we therefore evaluate against existing handwritten declaration files, knowing that these contain imprecise information.

We first investigate the ability of tsinfer to identify classes, modules, instance fields, methods, and module functions (but without considering inheritance relationships between the classes, or the types of the fields, methods, and functions). These features form a hierarchy in a declaration file. For example, PIXI.Matrix.invert identifies the invert method in the Matrix class in the PIXI module of PixiJS. When comparing the inferred features with the ones in the handwritten declaration files, a true positive (TP) is one that appears in both, a false positive (FP) exists only in the inferred declaration, and a false negative (FN) exists only in the handwritten declaration. In case of an FP or FN, we exclude its sub-features from the counts. The quality of the types of the fields and methods is investigated later in this section; for now we only consider their existence.
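Precision and recall are then computed from these counts in the standard way:

```latex
\mathrm{Prec} = \frac{TP}{TP + FP}, \qquad \mathrm{Rec} = \frac{TP}{TP + FN}
```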

Table 3. Precision of inferring various features of a declaration file.

The counts are shown in Table 3, together with the resulting precision (Prec) and recall (Rec). We see that tsinfer successfully infers most of the structure of the declaration files, although some manual post-processing is evidently necessary. For example, 80.9% of the classes and 95.7% of the fields are found by tsinfer. Having false positives in an inferred declaration (i.e., low precision) is less problematic than false negatives (i.e., low recall): it is usually easier to manually filter away extra unneeded information than adding information that is missing in the automatically generated declarations.

The identification of classes, modules, methods, and module functions in tsinfer is based entirely on the snapshots (Sect. 3.1), so one might expect 100% precision for those counts. (Identification of fields is also partly based on the static analysis.) The main reason for the non-optimal precision is that many features are undocumented in the manually written declarations. By manually inspecting these cases, we find that most of them are likely intentional: although they are technically exposed to the applications, the features are meant for internal use in the libraries and not for use by applications. Non-optimal recall is often caused by intentional discrepancies as discussed in Sect. 2, or by libraries that violate our assumption, explained in Sect. 3.1, that the API is fully established after the initialization code has finished. Other reasons for non-optimal precision or recall are simply that the handwritten declaration files contain errors or, in cases where the version number is not clearly stated in the declaration file, that we were unable to correctly determine which library version it is supposed to match.

To measure the quality of the inferred types of fields and methods, we again used the handwritten declaration files as the gold standard and this time manually compared the types in places where the inferred and handwritten declaration files agreed about the existence of a field or method. Such a comparison requires some manual work, so we settled for sampling: for each library, we compared 50 fields and 100 methods (of which 50 were classified as constructors), or fewer if not that many were found in the library.

The result of this comparison can be seen in Table 4, where Perfect means that the inferred and handwritten types are identical, Good means that the inferred type is better than having nothing, Any means that the main reason for the sample not being perfect is that either the inferred or the handwritten type is any, Bad means that the inferred type is far from correct, and No params means that the inferred type has no parameters while the handwritten one does. Obviously, this categorization to some extent relies on human judgement, but we believe it nevertheless gives an indication of the quality of the inferred types. An example in the Good category is from PixiJS, where tsinfer infers a perfect type for the PIXI.Matrix.applyInverse method, except for the first argument, where it infers the type {x: number, y: number} instead of the correct PIXI.Point.
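As a sketch (the surrounding signature is our guess; the text above establishes only the difference in the first parameter's type):

```typescript
declare namespace PIXI {
  class Point { x: number; y: number; }
  class Matrix {
    // Handwritten declaration (hypothetical rendering):
    applyInverse(pos: Point, newPos?: Point): Point;
    // Inferred by tsinfer: structurally correct, but with the structural type
    // {x: number, y: number} in place of the named class PIXI.Point:
    //   applyInverse(pos: { x: number; y: number }, newPos?: Point): Point;
  }
}
```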

Table 4. Measuring the quality of inferred types of fields and methods.

As can be seen in Table 4, the types inferred for fields are perfect in most cases, and none of them are categorized as Bad. The story is more mixed for method types. Here, there are relatively fewer perfect types, but function signatures are also much more complex, given that they often contain multiple parameters as well as a return type, and parameters can sometimes be extremely difficult to infer correctly. For many method types categorized as Good, the overall structure of the inferred type is correct but some spurious types appear in type unions for some of the parameters or the return type, or, as in the example with applyInverse, an object type is inferred whose properties are a subset of the properties in the handwritten type. The main reason that some method types are categorized as No params is that our analysis is unable to reason precisely about the built-in function Function.prototype.apply and the arguments object. We leave it as future work to explore more precise abstractions of these features.

RQ4 (Usefulness of tsevolve)

To evaluate whether tsevolve can assist in evolving declaration files, we performed a case study where tsevolve was used for updating declaration files in 7 different evolution scenarios. In each case, we used the output from tsevolve to make a pull request to the relevant repository. All of these libraries have more than 10000 stars on GitHub and had a need for the declaration file to be updated, but were otherwise randomly selected. We had no prior experience in using any of the libraries.

The output from tsevolve is a list of changes for each declaration file. We took the output lists from each of the 7 updates and classified each entry in each list based on how useful it was in the process of evolving the specific library.

Table 5. Classification of tsevolve output.

The result of this can be seen in Table 5, where each change listed by tsevolve is counted in one of the four columns. TP counts true positives, i.e., changes that reflect an actual change in the library that should be reflected in the declaration file. Both FP and FP* count false positives, the difference being that changes counted in FP* could easily be identified as spurious by looking at the output from tsevolve, as explained in Sect. 4. Unclear counts the listed changes that could not easily be categorized.

In the update from Ember.js version 1.13 to version 2.0, all of the 24 changes in the FP category are due to Ember.js breaking our assumption about the API being fully established after the top-level code has executed. None of the other libraries violate that assumption.

In the update of Handlebars.js from version 3 to 4, all 59 entries in the Unclear category are due to the structures of the handwritten and the inferred declaration files being substantially different. tsevolve is therefore not able to automatically filter out undocumented features, so all 59 entries had to be filtered out manually.

From Table 5 we can see that the output from tsevolve mostly points out changes that should be reflected in the corresponding declaration file. Most of the spuriously reported changes can easily be identified as spurious and are therefore not a big problem.

Table 6. Pull requests sent based on tsevolve output (the pull requests: https://gist.github.com/webbiesdk/f82c135fc5f67b0c7f175e985dd0c889).

These outputs of tsevolve were used to create pull requests, which are described in Table 6. For each pull request, we show how many lines the pull request added and removed in the declaration file, along with a response from a library developer, if one was given. For Handlebars.js, the pull request additionally contains a few corrections of errors in the declaration file that were spotted while reviewing the report from tsinfer. All 7 pull requests were accepted without any modifications to the changes derived from the tsevolve output.

The total working time spent going from tsevolve output to finished pull requests was approximately one day, despite having no prior experience using any of the libraries. Without tool support, creating such pull requests, involving a total of 407 lines added and 883 lines removed, for libraries that contain a total of 129365 lines of JavaScript code across versions and declaration files containing 3938 lines (after the updates), clearly could not have been done in the same amount of time.

6 Related Work

The new tools tsinfer and tsevolve build on the previous work on tscheck [8], as explained in detail in the preceding sections. Other research on TypeScript includes formalization and variations of its type system [4, 17, 18, 22], and several alternative techniques for JavaScript type inference exist [6, 11, 16]; however, none of that work addresses the challenges that arise when integrating JavaScript libraries into typed application code.

The need for co-evolving declaration files as the underlying libraries evolve can be viewed as a variant of collateral evolution [14]. By using our tools to increase confidence that the declaration files are consistent with the libraries, the TypeScript type checker becomes more helpful when developers upgrade applications to use new versions of libraries.

Our approach to analyzing the JavaScript libraries differs from most existing dataflow and type analysis tools for JavaScript, such as TAJS [2, 9] and SAFE [3], which are whole-program analyzers and not sufficiently scalable and precise for typical JavaScript library code. We circumvent those limitations by concretely executing the library initialization code and using a subset-based analysis that is inspired by Pottier [15], Rastogi et al. [17], and Chandra et al. [6].

Other languages, such as typed dialects of Python [10, 23], Scheme [21], Clojure [5], Ruby [12], and Flow for JavaScript [7], have similar challenges with types and cross-language library interoperability, though not (yet) at the same scale as TypeScript. Although tsinfer and tsevolve are designed specifically for TypeScript, we believe our solutions may be more broadly applicable.

7 Conclusion

We have presented the tools tsinfer and tsevolve and demonstrated how they can help programmers create and maintain TypeScript declaration files. By making the tools publicly available, we hope that the general quality of declaration files will improve, and that further use of the tools will provide opportunities for fine-tuning the analyses towards the intentional discrepancies found in real-world declarations.