Re: What we might produce

As important as the DMA and the ACCESS Act are, I'm deeply worried about
us over-rotating on any specific proposal, given the tumultuous path ahead
for these and other vehicles. For example, while I was a champion of the
Senate's 2019 ACCESS Act, I have concerns with the House version that
followed it.

If we're able to abstract these legislative vehicles up into something
resembling norms, there might be unique value in digging in there. For
example, it's fairly obvious (IMO) that platforms offering APIs are
permitted to engage in some level of security/privacy practice in how they
authenticate and oversee the use of such interfaces; but those same
practices can also serve as cover for constraints imposed to limit
effective interoperability. Typically, laws punt that distinction to a
case-by-case approach, i.e., they give some regulator (like the FTC in the
US) the authority to investigate and make a determination. But there's no
clear roadmap/playbook for the agency to use in undertaking such a review,
and so much of the underlying information will be proprietary that it'll
be hard to advise on or evaluate the agency's action. Is there a canonical
set of questions we could develop and recommend that an agency ask of the
platform to interrogate the inherent tensions here? (And of course, that
same set of questions could be used by third parties, albeit with less
access to the internal data, or by the platform itself, to try to avoid
legal consequences or just plain be better.)

And how many more questions like that one are there? [Recap: assuming the
legitimacy of using privacy/security controls, and assuming the
undesirability/possible illegality of using controls with the intent or
effect of inhibiting interoperability and competition, how can we tell the
difference?]

That may feel like putting the cart before the horse to some in this
group, and I'm happy to shelve/sidebar it. But a "focus on the tough
questions" lens like this is the one I used in a recent multistakeholder
project in the online content management space (see the final report here
<https://www.rstreet.org/2021/09/15/applying-multistakeholder-internet-governance-to-online-content-management/>),
and I found it illustrative in many ways.

Cheers,
Chris

On Mon, Dec 6, 2021 at 8:31 AM hellekin <how@zoethical.com> wrote:

> On 12/6/21 8:29 AM, Mark Nottingham wrote:
> > The charter calls out two different kinds of activities that this group
> might pursue (each likely leading to a Report):
> >
> > 1) Where we proactively want to recommend* specifications or
> technologies as a remedy in a defined situation (e.g., social networking,
> chat, etc.), highlighting any gaps
> > 2) Where we want to react to such a proposal made elsewhere (e.g.,
> when a competition regulator or other national body selects something)
> >
>
> Thank you for this starter, Mark.
>
> As a reminder, I'd like to suggest that what happened with the 'upload
> filters' and the Copyright Directive in the EU was detrimental to the
> Internet community. Policy makers gathered the media giants to discuss
> the conditions, ending up with so-called solutions involving time-based
> reply requirements (i.e., you must reply within X hours, 24/7, which is
> good for a multinational company operating worldwide, but very bad for a
> small actor run by volunteers who do sleep at night); had they
> consulted independent service providers, we would certainly have come up
> with volume-based approaches that make the giants much more liable than
> smaller actors.
>
> This kind of asymmetry should remain on our radar when thinking about
> "solutions" for technical and policy issues. IRC dwellers may remember
> the time when AOL doubled the size of the Undernet overnight by joining
> the network, and what happened next in terms of cultural di(sso)lution.
> Interoperability must be taken seriously in postcolonial terms as well.
>
> Regards,
>
> ==
> hk
>
>