- From: Christian Chiarcos <christian.chiarcos@web.de>
- Date: Fri, 22 Jan 2021 12:59:20 +0100
- To: Dave Raggett <dsr@w3.org>
- Cc: public-cogai <public-cogai@w3.org>
> All of this is consistent with a pipelined approach to natural language
> understanding, where processing occurs concurrently at different stages
> along the pipeline. This avoids backtracking, but includes a mechanism to
> reprocess text from a problem word when a problem is detected.

I don't want to be petty-minded, but isn't "reprocess text from a problem
word when a problem is detected" the very definition of backtracking? --
Just kidding ;)

I agree that a pipelined approach with some concurrent processing is
probably the most realistic way in which a cognitive architecture can be
made efficient. We don't really have a good term for that. Strictly
speaking, that's not a pipeline (in the NLP sense, at least), but more a
parallel laying of pipes -- if you will; the metaphor doesn't really fit
either. (A toy sketch of what I mean by "reprocess from a problem word" is
in the P.S. below.)

> I find this exciting as there are many clues that point to the
> requirements for a functional model of NLU, NLG and language learning, and
> the challenge is to experiment with ideas for realising those requirements
> in a simple way. We will then be able to test how performance scales with
> the length of a sentence and other attributes.

Looking forward to exploring that ;)

All the best,
Christian
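
P.S.: Just to make the quibble concrete, here is a minimal Python sketch of
what "reprocess text from a problem word" could look like on a classic
garden-path sentence. Everything in it is made up for illustration (the tiny
lexicon, the category labels, the problem test), and the stages are called in
sequence here; in the concurrent setting we are discussing, each stage would
sit in its own thread and the problem index would travel back over a feedback
channel.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Token:
    index: int
    word: str
    tag: Optional[str] = None    # filled in by the lexical stage
    role: Optional[str] = None   # filled in by the (toy) parsing stage

def lexical_stage(tokens, start=0):
    # Toy lexical stage: look up a (possibly ambiguous) category per word,
    # leaving categories that were already committed alone.
    lexicon = {"the": "DET", "old": "ADJ/N", "man": "N/V", "boats": "N/V"}
    for tok in tokens[start:]:
        if tok.tag is None:
            tok.tag = lexicon.get(tok.word, "N")

def parsing_stage(tokens, start=0):
    # Toy incremental stage: assign roles left to right and report the index
    # of a problem word when a determiner follows a still-ambiguous N/V word
    # (the cue that "man" in "the old man the boats" must be the verb).
    for tok in tokens[start:]:
        if tok.tag == "DET" and tok.index > 0 and tokens[tok.index - 1].tag == "N/V":
            return tok.index - 1
        tok.role = "pred" if tok.tag == "V" else "arg"
    return None

def analyse(sentence):
    tokens = [Token(i, w) for i, w in enumerate(sentence.lower().split())]
    start = 0
    while True:
        lexical_stage(tokens, start)
        problem = parsing_stage(tokens, start)
        if problem is None:
            return tokens
        # "Reprocess from the problem word": commit the revised reading and
        # resume from that word, rather than restarting the whole sentence.
        tokens[problem].tag = "V"
        start = problem

if __name__ == "__main__":
    for tok in analyse("The old man the boats"):
        print(tok)

Whether resuming from the problem word like this counts as "backtracking" or
merely as "local repair" is, I suppose, exactly the terminological question ;)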
Received on Friday, 22 January 2021 12:00:28 UTC