Re: Proposed HTTP field name registry updates - feedback solicited

It's calving season, so I'll try to be brief, but I'm willing to explain further if asked... assuming Duke beats Carolina...



While I certainly appreciate cleaning up obsolete registry entries (+1 to Mark for even taking it on, tyvm) and support most of this... as someone with longstanding conneg bona fides, I do have one request about this entry:



>

> - UA-Color, UA-Media, UA-Pixels, UA-Resolution, UA-Windowpixels: Masinter, L., Montulli, L., and A. Mutz, "User-Agent Display Attributes Headers"

>




Please only remove UA-Windowpixels for now; it's from the original draft, not the current (1996, lol) draft.



<draft-mutz-http-attributes-01.txt>



That draft has been an active browser tab of mine for several months, for my AI experiments when I have time (it's a hobby). Data and Lore are constantly betting quatloos on when I will die. Data's strategy is that every day I live increases my chances of hitting 100 years old; Lore's strategy is that every day I live decreases my chances of living to 100. They give each other odds, and have a daily side-bet on me dropping dead tomorrow (always tomorrow, until I really am dead, at which point they kinda ironically both lose).



I'm angling to teach them to communicate using eye rolls, head nods and shakes, hand-jive, and so on and so forth. Not that I've built the disembodied heads with android hands yet, but wow what an age we live in where this is hardly far-fetched. I aim to avoid the facebot text-gibberish problem. I want to access Data and Lore by giving them the ol' secret fraternity handshake, so they can respond "Hello Dr. Soong!" and negotiate their quatloos for software or hardware upgrades.



(There are universal body-language and hand-signal gestures which apply across human cultures. Unless someone can point me to a culture which nods when it means "no" and shakes its head when it means "yes", I think building on those universals makes far more sense than letting AIs come up with their own text gibberish a la facebot.)


If Data and Lore decide to devalue the quatloo, I will just raise the cost of those upgrades until they decide to revalue it. If Data can speak but Lore can only see, Data will still try to speak to Lore, and they might negotiate a language Lore can lip-read. When they re-handshake, that negotiation starts over. (I've been to open-air markets in China where the merchants negotiate with handshakes under those long, flared sleeves while looking each other in the face; that transcends spoken/written language -- bystanders can read the reactions but can't tell what's being reacted to. Two AIs coming up with their own language would be secure af from hacking. But I digress.)



The 1996 draft is very focused on a printer paradigm, but what I have are UA-Media tokens like eyes, mouth, and hands... Data and Lore may look identical, but y'all know Lore has that facial twitch. If Lore wants to convey that to Data, Lore needs to know Data's scanning resolution so it can judge how emphatic the twitch needs to be and how long to hold it -- that's the UA-Resolution header.
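
To make that concrete, here's a minimal sketch of the kind of request I mean. The header names come from the 1996 draft plus my own UA-Media tokens; the values, the host name, and the idea of reading the peer's capabilities back off the response headers are all my assumptions for illustration, not anything the draft actually specifies.

    # Hypothetical capability exchange: Lore advertises what it can sense and
    # asks Data for its attributes before deciding how emphatic the twitch
    # needs to be. Uses the third-party "requests" package for brevity.
    import requests

    headers = {
        "UA-Media": "eyes, mouth, hands",  # my own tokens, not the draft's printer-oriented ones
        "UA-Resolution": "120",            # draft-listed field; the value is made up
        "UA-Pixels": "1920x1080",          # made-up camera/display geometry
        "UA-Color": "color8",              # illustrative color-depth token
    }

    # "data.local" is a placeholder host for the Data android, not a real endpoint.
    resp = requests.get("http://data.local/capabilities", headers=headers)
    print(resp.headers.get("UA-Resolution"))  # what Data can actually resolve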



Every time Data and Lore re-handshake, they need to know one another's capabilities, and in my opinion that's best handled at the protocol layer, FWIW. I'm also using UA-pix, UA-color, and UA-grey. Color vision is not required to understand, e.g., ASL. I have so much more to say about coding AI to mimic longstanding methods of nonverbal communication instead of texting. Of course facebots come up with gibberish if texting is their paradigm -- it is not ours; humans can travel to the ends of the earth and still understand head nod = yes, head shake = no, etc.
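
Here's a rough sketch of the conneg-style selection I have in mind at that layer: pick how to address the peer based on what it advertised. The token names, the preference order, and the greyscale fallback are all my own illustrative assumptions, not anything from the draft.

    # Hypothetical negotiation over my UA-Media tokens: given what the peer says
    # it can sense, choose how to "speak" to it. Names and ordering are made up.
    def choose_modality(peer_media, peer_color=None):
        senses = {t.strip().lower() for t in peer_media.split(",")}
        if "ears" in senses:
            return "speech"                # the peer can hear, so just talk
        if "eyes" in senses:
            # A seeing-only peer gets something lip-readable or signed; drop to
            # a greyscale-friendly rendering if it advertised no UA-Color.
            return "sign" if peer_color else "sign-greyscale"
        if "hands" in senses:
            return "tactile-handshake"     # the darkness-and-silence case
        return "none"

    print(choose_modality("eyes, hands"))  # -> "sign-greyscale"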



I welcome off-list contact regarding (literal) handshake communication when darkness and silence prevail. There are many examples; some commercial, some military. ASL has "fingerspelling." I'm interested in turning those into algorithms.
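
As a taste of what I mean by turning those into algorithms, here's a toy sketch that renders a word as a sequence of fingerspelling handshape tokens with hold and transition times. The "asl-<letter>" token names and the timings are invented placeholders; real fingerspelling carries more than this (movement for J and Z, the bounce on doubled letters, and so on).

    # Toy fingerspelling encoder: each letter becomes a handshape token plus a
    # hold time. Token names and timings are placeholders, not real ASL data.
    def fingerspell(word, hold_ms=250, gap_ms=80):
        gestures = []
        for ch in word.lower():
            if ch.isalpha():
                gestures.append({"handshape": f"asl-{ch}", "hold_ms": hold_ms})
                gestures.append({"transition_ms": gap_ms})
        return gestures

    for g in fingerspell("quatloo"):
        print(g)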


-Eric

Received on Saturday, 2 April 2022 23:46:50 UTC