[w3c/editing] EditContext: Who's going to handle caret navigation? (#266)

Moving [an EditContext issue](https://github.com/MicrosoftEdge/MSEdgeExplainers/issues/118) from @Reinmar into this repo.

## Problem
At the Editor TF meeting in Paris, back in 2015, we talked a lot about the challenges of handling caret navigation. There are dozens of interactions to handle:

* up/right/down/left arrow keys
* jumping over a whole word
* jumping to the beginning/end of the line (Home/End)
* shift + the above
* ctrl + home/end
* page up/down
* mouse
* touch
* pen?
* and whatever else already exists or will appear in the future

On top of that, we have mixed RTL and LTR content, which is extremely tricky to handle, OS-specific behaviours, and so on.

In other words, there is very little chance that a JS app will get all of those things right. We agreed on that in Paris.
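
To make the scale of the problem concrete, here is a minimal sketch of just the "intention detection" step for a handful of movements. The platform check and the key combinations are my own illustrative assumptions, not an exhaustive (or even fully correct) mapping of real platform conventions:

```javascript
// Illustrative only: real platform conventions differ in many more ways.
const isMac = navigator.platform.startsWith("Mac");

function caretIntent(e) {
    // "Go to line start": Home on Windows/Linux, Cmd+Left on macOS.
    if (e.key === "Home" || (isMac && e.metaKey && e.key === "ArrowLeft")) {
        return { move: "lineStart", extend: e.shiftKey };
    }
    // "Go to line end": End on Windows/Linux, Cmd+Right on macOS.
    if (e.key === "End" || (isMac && e.metaKey && e.key === "ArrowRight")) {
        return { move: "lineEnd", extend: e.shiftKey };
    }
    if (e.key === "ArrowLeft") {
        // Ctrl (Windows/Linux) or Option (macOS) means "by word", and in
        // RTL text "left" may mean forward rather than backward.
        const byWord = isMac ? e.altKey : e.ctrlKey;
        return { move: byWord ? "wordBack" : "charBack", extend: e.shiftKey };
    }
    return null; // dozens of further combinations omitted
}
```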

Reading the EditContext explainer, I found an example like this:

```javascript
editContainer.addEventListener("keydown", e => {
    // Handle control keys that don't result in characters being inserted
    switch (e.key) {
        case "Home":
            model.updateSelection(...);
            view.queueUpdate();
            break;
        case "Backspace":
            model.deleteCharacters(Direction.BACK);
            view.queueUpdate();
            break;
        ...
    }
});
```
It seems rather unrealistic, for three reasons:

* "intention" detection will be tricky to get right (OS-specific, key combinations, etc.)
* calculating the results for all the interactions is extremely tricky (page up/down, arrow keys themselves)
* mouse + touch + pen + whatever else come to your mind – you just don't want to do that in JS
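
To illustrate the second point: even something as "simple" as Page Down requires layout information to compute. Here is a rough sketch of what that might involve in JS, assuming the app still renders its text into a scrollable DOM container (an app painting to a canvas has no such shortcut); `caretAfterPageDown` and its parameters are hypothetical names:

```javascript
// Rough sketch: find the caret position one "page" below the current
// caret rectangle. Both caret-from-point APIs are real, but neither is
// available everywhere, which is part of the problem.
function caretAfterPageDown(container, caretRect) {
    const x = caretRect.left;
    const y = caretRect.top + container.clientHeight;
    if (document.caretPositionFromPoint) {
        const pos = document.caretPositionFromPoint(x, y);
        return pos && { node: pos.offsetNode, offset: pos.offset };
    }
    if (document.caretRangeFromPoint) { // WebKit/Blink variant
        const range = document.caretRangeFromPoint(x, y);
        return range && { node: range.startContainer, offset: range.startOffset };
    }
    return null; // no supported API – the app is on its own
}
```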

## Questions
* Do I get the intention of this explainer right? Was it meant to push all of this responsibility to the JS side? I'm asking because, at the end, it briefly mentions some alternatives, but it's unclear whether these are alternatives to the entire EditContext API or e.g. alternative ways to handle user interactions.
* A follow-up question – couldn't the browser fire events (just like beforeInput and perhaps a beforeSelectionChange) for the user interactions mentioned in this ticket? They would not only provide better intention detection than listening to keydown or mousemove, but could also indicate to the JS app what the browser would normally do itself (e.g. via target ranges). See the sketch below.
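
For what it's worth, here is a sketch of the shape such events could take, reusing the `model`/`view` names from the explainer's example. `beforeinput` and `getTargetRanges()` are real today; `beforeselectionchange` and its `targetRange` property are hypothetical:

```javascript
// Real today: the browser interprets the interaction and reports
// what it would edit via target ranges.
editContainer.addEventListener("beforeinput", e => {
    const targets = e.getTargetRanges(); // StaticRange[] the browser would modify
    model.applyInput(e.inputType, e.data, targets); // hypothetical model method
    e.preventDefault();
    view.queueUpdate();
});

// Hypothetical: fired for caret/selection movements from any input
// device (keyboard, mouse, touch, pen), carrying the selection the
// browser would produce by default.
editContainer.addEventListener("beforeselectionchange", e => {
    model.updateSelection(e.targetRange); // targetRange is hypothetical
    e.preventDefault();
    view.queueUpdate();
});
```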

## PS
This is my biggest doubt regarding EditContext. The other one is spell checking (I will open a separate ticket in a moment). Other than that, I think it's an interesting direction. But a key point for me here is making this API "adoptable" by apps less complex than Google Docs and Office Online – that is, apps which do not calculate the entire layout themselves.
