Summary: Looks like there were lint errors, but `make lint` wasn't failing on
them because it would just automatically fix them! This removes the `--fix`
flag from the `eslint` invocation and fixes the remaining lint errors.
Test Plan:
- `make lint`
Auditors: kevinb
It looks like the `text-align: center` rule is affecting the text in sub and
superscripts. Fixes #447
Test Plan:
- Visit [this
example](http://localhost:7936/?text=x%5El_%7Bi%5E%7Bl%2B1%7D%2Bi%2C%20j%5E%7Bl%2B1%7D%2Bj%2C%20d%7D)
- Edit the HTML to add `<span class="katex-display">...</span>` around the
`<span class="katex">` node.
- See that the sub and superscripts are left-aligned, not centered
(It looks like we don't have a way to test this in the screenshotter for now)
@kevinb
Summary:
Alpert alerted me to the fact that \centerdot and \cdot are
not the same despite what MathJax thinks.
Test Plan:
- make serve
- load http://localhost:7936/
- see that `a \centerdot b` produces a small, bottom-aligned square
Auditors: alpert emily
Summary:
Update the symbol definition for \centerdot so that it does
the same thing as \cdot.
Fixes https://github.com/Khan/KaTeX/issues/421.
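As a rough sketch (using a defineSymbol-style helper for illustration; the
actual symbols-table format in symbols.js may differ), the change amounts to
giving `\centerdot` the same entry as `\cdot`:

```js
// Illustration only: "\u22c5" (DOT OPERATOR) is the character used by \cdot,
// and \centerdot now gets the identical definition as a math-mode binary operator.
defineSymbol(math, main, bin, "\u22c5", "\\cdot");
defineSymbol(math, main, bin, "\u22c5", "\\centerdot");
```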
Test Plan:
- make serve
- open http://localhost:7936/
- verify that `a \centerdot b` looks the same as `a \cdot b`
Auditors: emily
Summary:
We'd like contributors to use the same linter and lint rules that we use
internally. This diff swaps out jshint for eslint and fixes all lint failures
except for the max-len failures in the test suites.
Test Plan:
- ka-lint src
- make lint
- make test
Reviewers: emily
Otherwise it could happen that some Mozilla page gets shown which has a
minimum size larger than the 786px we're requesting, and the screenshot
will then span that entire page even if the window is smaller, resulting in a
failure to adjust the screenshot size.
See http://kb.mozillazine.org/Browser.startup.homepage_override.mstone
and http://kb.mozillazine.org/Browser.startup.page for details.
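As a sketch of the intended setup (assuming the preferences are set via a
selenium-webdriver Firefox profile; the screenshotter may wire this up
differently):

```js
var firefox = require("selenium-webdriver/firefox");

var profile = new firefox.Profile();
// Don't show the "what's new" page Firefox opens after a version change.
profile.setPreference("browser.startup.homepage_override.mstone", "ignore");
// Start with a blank page instead of any home page.
profile.setPreference("browser.startup.page", 0);
```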
Just in case, we also include the Docker image digests in the Travis build
log, to increase the chances of reproducing what we get there.
This is almost like the align* environment, but it starts out in math mode,
so we don't have to worry about the fact that we have no real surrounding
text mode in KaTeX. This is the first step towards align* and align.
Instead of passing around the current position as an argument, we now have a
parser property called pos to keep track of that. Instead of repeatedly
re-lexing at the current position we now have a property called nextToken
which contains the token beginning at the current position. We may need to
re-lex if we switch mode. Since the position is kept in the parser state,
we don't need to return it from parsing methods, which obsoletes the
ParseResult class.
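A minimal sketch of the resulting pattern (method and property names here are
illustrative, and the lexer is assumed to return tokens that carry the position
just past their end):

```js
// Advance past the current token and lex the next one at the new position.
Parser.prototype.consume = function() {
    this.pos = this.nextToken.position;
    this.nextToken = this.lexer.lex(this.pos, this.mode);
};

// Switching between math and text mode can change how the upcoming input
// tokenizes, so the token at the current position has to be lexed again.
Parser.prototype.switchMode = function(newMode) {
    this.mode = newMode;
    this.nextToken = this.lexer.lex(this.pos, this.mode);
};
```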
This is an attempt to actually exercise all the code paths which can lead to
a ParseError exception (from malformed user input, without tinkering with
any KaTeX internals or exploiting a KaTeX bug). It documents the current
state of affairs, without changing any error messages. Comments indicate
future work, particularly with respect to the position often associated with
these error messages.
Instead of having our own copy of jasmine in the repository, we use
jasmine-core as an npm dependency and load it from there. That reduces the
size of the repository and helps keep it up to date. We're not using the
transitive dependency on jasmine-core via jasmine, since the jasmine package
might change its dependency any day (although unlikely).
The katex-spec.js shipped from the server now includes all
`test/*[Ss]pec.js` (as matched via glob) so that additional spec files can
be created and will automatically get included in the browser-side test
suite. The contrib specs are not included at this point.
Visit http://0.0.0.0:7936/test/test.html while running server.js to see this
in action and verify the lack of failures.
Jasmine supports node these days, so there is no longer a need to use a
separate (and unmaintained) package to provide such bindings.
Making the switch exposed several misuses of the `toMatch` assertion in the
existing specification. Most of them were converted to `toEqual`, since
`toMatch` is only for matching against regular expressions.
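For illustration (the `getParsed` helper is an assumption about the spec's
helpers, not necessarily its real name):

```js
// toMatch compares a string against a regular expression (or substring)...
expect("\\centerdot").toMatch(/^\\[a-zA-Z]+$/);
// ...whereas comparing parse results structurally needs deep equality.
expect(getParsed("\\sqrt x")).toEqual(getParsed("\\sqrt{x}"));
```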
Now Travis can run the screenshotter in verification mode. The files in the
repository will be seen as the expected outcome, and if the actual result
differs from that, it might be attempted four more times before the test
case is actually deemed failed. A timeout between page load and screenshot
should allow any possible font issues to settle down.
Thanks to the docker service provided by Travis CI, we should be able to
download and use the Selenium docker images in order to run our
screenshotter and check whether all the screenshots match the images from
the repository.
Summary:
The ability to use pre-determined character widths will benefit alternative
layout engines such as gagern's canvas layout engine. I would also like to
experiment with using CSS transforms to absolutely position each glyph. This
diff adds a new make rule, `make extended_metrics`, which generates metrics
that also contain glyph widths.
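A hypothetical excerpt of what an extended entry could look like (the field
order shown here is an assumption):

```js
module.exports = {
    "Main-Regular": {
        // code point 65 ("A"): depth, height, italic correction, skew,
        // and now a fifth number for the glyph width (all in ems)
        "65": [0.0, 0.68333, 0.0, 0.0, 0.75],
    },
};
```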
Test Plan:
- run `make extended_metrics`
- verify that fontMetricsData.js contains entries with 5 numbers instead of 4
Reviewers: emily alpert
There are two main motivations for this commit. One is unicode input, which
requires unicode characters to get past the lexer. See discussion in #261.
The second is in preparation for #266, where we'd deal with one token of
look-ahead but might be lexing that token in an unknown mode in some cases.
The unit test shipped with this commit addresses the latter concern, since
it checks that a math-mode-only token may immediately follow some text mode
content group.
In this new implementation, all the various things that could get matched
have been collected into a single regular expression. The hope is that
this will be beneficial for performance and keep the code simpler.
The code was written with Unicode input in mind, including non-BMP codepoints.
The role of the lexer as a gatekeeper, keeping out invalid TeX syntax, has
been abandoned. That role is still fulfilled by the symbols and functions
tables, though, since any input which is neither a symbol nor a command is
still considered invalid input, even though it lexes successfully.
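A much simplified sketch of the single-regex idea (the real pattern and the
surrounding Lexer/Token/ParseError plumbing differ in detail):

```js
// Group 1 matches a run of whitespace; group 2 matches one token, which is a
// control word, a control symbol, a surrogate pair (non-BMP code point), or
// any other single non-space character.
var tokenRegex = /([ \t\r\n]+)|(\\[a-zA-Z]+|\\.|[\uD800-\uDBFF][\uDC00-\uDFFF]|\S)/g;

Lexer.prototype.lex = function(pos) {
    if (pos === this.input.length) {
        return new Token("EOF", pos, pos);
    }
    tokenRegex.lastIndex = pos;
    var match = tokenRegex.exec(this.input);
    if (match === null || match.index !== pos) {
        throw new ParseError("Unexpected character: '" + this.input[pos] + "'");
    }
    return match[1]
        ? this.lex(pos + match[1].length)  // skip whitespace, lex again
        : new Token(match[2], pos, pos + match[2].length);
};
```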
Fixes issue #255.
Mixing the variable number of arguments a function receives from TeX code
with the fixed arguments which the parser provides can cause some confusion.
After this change, a handler will receive exactly two arguments: one is a
context object from which things provided by the parser can be accessed by
name, which allows for simple extensions in the future. The other is the
list of TeX arguments, passed as an array.
If we ever switch to ECMAScript 2015, we might want to use its destructuring
features to name the elements of the args array in the function head. Until
then, destructuring that array manually immediately at the beginning of the
function seems like a useful convention to easily find the meaning of these
arguments.
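A rough sketch of the resulting handler shape (the function name and fields
shown here are illustrative):

```js
defineFunction("\\frac", {numArgs: 2}, function(context, args) {
    // Things provided by the parser are looked up by name on the context
    // object (e.g. context.funcName or context.parser), so new fields can be
    // added later without changing every handler's signature.
    // Manually "destructure" the TeX arguments right away, as suggested above.
    var numer = args[0];
    var denom = args[1];
    return {
        type: "frac",
        numer: numer,
        denom: denom,
    };
});
```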