- From: Bumblefudge <bumblefudge@learningproof.xyz>
- Date: Tue, 30 Jan 2024 05:51:11 +0000
- To: public-swicg@w3.org
- Message-ID: <d69877c1-dcad-4378-b7f4-76733dc252cb@learningproof.xyz>
Reminder: in 10.5 hours this call is happening!

On 1/24/2024 6:27 PM, Johannes Ernst wrote:

>> Bumblefudge [<bumblefudge@learningproof.xyz>](mailto:bumblefudge@learningproof.xyz) wrote:
>
>> Please reply-all to the list if you would like to put something on the agenda.
>
> 1. I can provide a brief update on the status of FediTest. There is some running code, but it is still experimental and a bit too early to show-and-tell.

Excellent!

> 2. I could also provide an overview of the responses to my survey on developers, their development setup, and virtualization. There are 44 responses so far, some of them unexpected (to me).

(A number that surprised me!)

> Survey is here: https://apps.dazzlelabs.net/nextcloud/apps/forms/s/ed2WBPzrrWFcWKjT5a9MwMGp

Nice!

> More importantly, echoing some of what Marcus said:
>
>> On Jan 23, 2024, at 01:17, Marcus Rohrmoser [<me.swicg@mro.name>](mailto:me.swicg@mro.name) wrote:
>>
>> Yes, however I am unsure how to phrase it.
>>
>> 1. IMO the tests should benefit the netizens, operators and developers, and therefore should be easy to operate and friendly to ad-hoc deployment (rather than 24/7) by unprivileged (non-root) netizens. Without vendor lock-in. And tomorrow and the week after as well, without permanent underlying framework changes to be integrated (upwards compatibility, I know it's hard). So I advocate thinking from the end towards the means, not vice versa. What options are there aside from PHP and CGIs?

I'd love an unpacking of this paragraph to be an agenda item, since I don't understand it. I love the idea that netizens and operators/admins have as much at stake as developers, but I'm not sure I can picture what state of affairs would benefit them equally, much less work backwards from it. Should be a fun discussion!

>> 2. Many ActivityPub processes are asynchronous in nature and are hard to follow and therefore hard to test. IMO we should encourage a 'friendly feedback' policy to
>> a. immediately report back the effect in a machine-readable manner, possibly with a URL to track the progress,
>> b. if asynchronous, then call back once there is a result,
>> c. never silently discard requests.

This sounds like something socialweb.coop has been grappling with lately: many of the "harder to test" requirements in AP need results other than pass/fail, and complex layered results. Would be good to sanity-check the tentative approach we've been working on. Love the name "friendly feedback" :D
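Just to check I've understood Marcus's three points, here is a very rough sketch (not anything socialweb.coop or FediTest has agreed on, and all field names are invented for illustration) of what a machine-readable "friendly feedback" acknowledgment could look like:

```typescript
// Hypothetical shape for a "friendly feedback" acknowledgment, following
// Marcus's three points: (a) report the effect immediately in a
// machine-readable way, (b) point at somewhere to track asynchronous
// progress, (c) never silently discard a request.
// Every field name here is made up for illustration.

interface FriendlyFeedback {
  // What happened to the request right now: accepted for async processing,
  // applied immediately, or rejected (but never silently dropped).
  status: "accepted" | "processed" | "rejected";
  // Where the sender can poll for (or be called back about) the final
  // outcome, in the asynchronous case.
  progress?: string; // URL
  // Explanation when something was rejected.
  reason?: string;
}

// Example: an inbox POST that will be processed asynchronously.
const ack: FriendlyFeedback = {
  status: "accepted",
  progress: "https://example.social/inbox/activities/123/status",
};

console.log(JSON.stringify(ack, null, 2));
```

The point being only that a sender always gets an immediate, parseable answer, plus somewhere to look for the eventual outcome of an asynchronous request.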
> 3. I think it would be useful to spend some time on the requirements side of testing for the Fediverse.
>
> We sometimes wave “testing!!!” around as some kind of magic wand, but as we can see from the various projects, I don’t think we quite agree on what testing should be done, and in particular why. Also: who needs it, and what does it need to look like so they can get the maximum benefit out of a testing environment and the results of testing?

The only magic wands I wave around are the words "composability" and "friendly fork" 😉

> 4. Another important subject would be: just where exactly do those tests come from? Who decides what is and isn’t a valid test? That’s particularly important because AS and AP are so flexible. Example:
>
> * if anybody gets to do anything allowed by AS and AP, hopes of out-of-the-box interop are low, and testing tells developers who want real-world interoperability fairly little.
>
> * if “passing all tests” is supposed to mean “will interoperate with 90% of the installed fediverse base”, then many tests have to be defined that do not have a root in a W3C or other standards document, but test for conventions deployed by the leading implementations.
>
> (Personally I believe we need to have both, and tests need to be organized in a very modular fashion based on the “authority” from which they are derived.)

Totally agree! The user running the tests decides which tests are valid, and will simply ignore any tests they disagree with unless we make it very easy for them to turn those tests off and replace them with tests they find valid. Ideally, if we serve those users well, they might even "upstream" their modifications and make our simple test suites a borgesian garden of forking paths and optionality 😅 (I've put a very rough sketch of what I mean by "modular by authority" after the quoted text at the bottom of this mail.)

I don't think the scope of this CG's testing task force is, or should be, limited to "writing tests for this CG's ratified specifications", and I want to support people writing profiles of IETF specs or of community/living documents not rooted in SDO authority. The only thing that strikes me as out of scope (at least for tomorrow's meeting) would be giving the CG's implicit blessing to, or donating its finite bandwidth to, specific implementations or platforms (even non-commercial ones!). Maybe I'm being more catholic than the pope, though? We could always scope calls with a single-implementation/platform API focus and declare that scope before scheduling, if there's demand for it. I'm just the facilitator and note-taker; the users and contributors set the agenda here.

> So plenty to talk about from my perspective :-)
>
> Cheers,
>
> Johannes.
>
> Johannes Ernst
>
> [Fediforum](https://fediforum.org/)
> [Dazzle Labs](https://dazzlelabs.net/)
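P.S. The promised sketch of "modular by authority": purely hypothetical, none of these names come from FediTest or any existing suite, just to make the idea of users switching whole groups of tests on or off a bit more concrete.

```typescript
// Hypothetical test manifest: every test is tagged with the "authority" it
// derives from, so the person running the suite can enable or disable whole
// groups (or swap in their own) without touching the tests themselves.
// All identifiers below are invented for illustration.

type Authority =
  | "w3c-activitypub"       // normative requirements from the W3C Recommendation
  | "w3c-activitystreams"
  | "ietf-profile"          // profiles of IETF specs
  | "community-convention"; // conventions of widely deployed implementations

interface TestCase {
  id: string;
  authority: Authority;
  description: string;
}

const manifest: TestCase[] = [
  { id: "inbox-post-202", authority: "w3c-activitypub",
    description: "Server accepts a valid Create activity POSTed to an inbox" },
  { id: "webfinger-acct", authority: "community-convention",
    description: "Actor is discoverable via WebFinger acct: lookup" },
];

// The user running the suite, not the suite author, decides which
// authorities count as "valid" for their purposes.
const enabled: Authority[] = ["w3c-activitypub", "community-convention"];
const toRun = manifest.filter((t) => enabled.includes(t.authority));

console.log(toRun.map((t) => t.id));
```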
Received on Tuesday, 30 January 2024 05:51:28 UTC