- From: Jeff Kline <jeffrey.l.kline@gmail.com>
- Date: Tue, 23 Jan 2024 16:31:26 +0000
- To: David Fazio <dfazio@helixopp.com>, "public-maturity@w3.org" <public-maturity@w3.org>, Charles LaPierre <charlesl@benetech.org>
- Message-ID: <BN0P223MB00229FC9CE3700AC512E7020A7742@BN0P223MB0022.NAMP223.PROD.OUTLOOK.COM>
David,

Please add a New Business Item to the agenda for the Jan 31 meeting (I have a conflict tomorrow): Maturity Model Scoring. Background is below.

Regards,
Jeff
<http://strategicaccessibility.com/>
jeffrey.l.kline@gmail.com
512.426.9779

From: Charles LaPierre <charlesl@benetech.org>
Date: Monday, January 22, 2024 at 1:23 PM
To: Jeff Kline <jeffrey.l.kline@gmail.com>
Subject: RE: [Maturity] Model Task Force 01-17-24 Meeting Agenda

Hi Jeff, see below.

From: Jeff Kline <jeffrey.l.kline@gmail.com>
Sent: Friday, January 19, 2024 4:46 PM
To: Charles LaPierre <charlesl@benetech.org>
Subject: Re: [Maturity] Model Task Force 01-17-24 Meeting Agenda

Hi Charles,

<jk> Yes, it was great. It would be interesting to know how the results will drive next steps and action plans. Also, is there a plan to reassess again in six months or a year?

<cl> Yeah, we are incorporating this into our quarterly OKRs, which will get re-evaluated every quarter. Like I said, I picked 4 proof points in K&S to work on as a company, and then picked out a couple of other proof points for each department, based on their activity in the company, that they could also work on this year. I plan to do a re-evaluation at the end of the year to see how much of an impact on scores we had year over year.

<jk> I started thinking while falling asleep last night 😳 about the ratings within the dimensions. Something about them is really bugging me, and I think a few of us need to put our heads together to simplify them and make them more meaningful.

<cl> Agreed.

<jk> Here are some thoughts:

* For example, when going from no accessibility policy to a published policy, the completed (Optimize stage) proof point is the true deliverable.
* When assessing this proof point, the Excel sheet currently allows 1 point for "starting activity" in the Launch stage, and then again 1 point for "starting activity" in each of the other (Integrate and Optimize) stages.
I thought about some sort of weighting for each of the stages, but I'm not sure that is meaningful either.

<cl> We weight "started" ×1, "partially implemented" ×2, and "complete" ×3, but you are right, we don't then apply any weighting across the stages; from Inactive through Optimize they are all treated the same.

* <jk> Four levels of progress ("not started", "started", "partially implemented", and "complete") for each proof point within each dimension is too granular (and subjective, I would argue), too complex, and confusing. Assigning values/multipliers (2× for "partially implemented" and 3× for "complete") at these stages isn't very useful, IMO.

<cl> Yeah, I hear you; it would be good to simplify. I think combining "started" and "partially implemented" into one option would be good. 0 points for not started, 1 for partial, 2 for complete, maybe? But combined with getting more points as you move toward Optimize.

* <jk> A simpler approach: let's say that a proof point has either met the criteria for that stage or it hasn't (checkbox / no checkbox), as evaluated against the general "outcomes" defined for each phase. So I am thinking we may need to eliminate the granular metrics at the stages.

<cl> Maybe. That could be good; I'm not sure whether binary works or whether we need 3 states like I mentioned above.

* <jk> I do like the current implementation of counting the number of proof points that have entered each stage, but with the multipliers eliminated. That makes sense. For example, "8 out of 17 proof points have entered or have activity in the Integrate stage." That's easy to look at and understand where progress is being made, and where it isn't. The next time the organization is assessed, they update the counts for each stage, and NO proof point can have entries in multiple stages, only the most recent. This can be done programmatically when we get to that juncture.

<cl> Right.
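[As a side note for the discussion: the two schemes being compared above, the current progress multipliers versus a plain count of proof points per stage with no multipliers, can be sketched in a few lines of Python. All names and sample data below are illustrative only, not part of the actual Maturity Model workbook.]

```python
# Illustrative sketch only: compares the current weighted scoring
# ("started" x1, "partially implemented" x2, "complete" x3) with the
# simpler per-stage count proposed above. Data and names are made up.
from collections import Counter

PROGRESS_WEIGHTS = {
    "not started": 0,
    "started": 1,
    "partially implemented": 2,
    "complete": 3,
}

# Each proof point records the furthest stage it has reached and its
# progress level within that stage (its "most recent" entry).
proof_points = [
    ("Launch", "complete"),
    ("Integrate", "started"),
    ("Integrate", "partially implemented"),
    ("Optimize", "complete"),
    ("Launch", "not started"),
]

def weighted_score(points):
    """Current scheme: sum the progress multipliers, ignoring stage."""
    return sum(PROGRESS_WEIGHTS[progress] for _, progress in points)

def stage_counts(points):
    """Proposed scheme: count proof points with activity per stage,
    each proof point counted only in its most recent stage."""
    return Counter(stage for stage, progress in points
                   if progress != "not started")

print(weighted_score(proof_points))  # 3 + 1 + 2 + 3 + 0 = 9
print(stage_counts(proof_points))    # e.g. 2 of 5 proof points in Integrate
```

The per-stage count supports statements like "2 out of 5 proof points have activity in the Integrate stage" directly, while the weighted sum collapses everything into a single number.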
* <jk> We could also, if really pushed, have a scoring system that provides partial credit for the progress of each proof point, with stage completions expressed as a percentage of a completed proof point in the Optimize stage:
  * Proof point in the No Activity stage = no checkmark, so 0%
  * Proof point in the Launch stage = considered 25% of the way to Optimize (complete)
  * Proof point in the Integrate stage = considered 55% of the way to Optimize (complete)
  * Proof point in the Optimize stage = 100% of the way to Optimize (complete)

But even then, I'm not sure how valuable this would be, and it could introduce confusion and distortion, so I'm not sure I would recommend this approach.

<cl> Interesting. We would need to see a sample of how different scoring algorithms work, and whether they make the numbers easier to understand or not.

* <jk> For reporting / visualizing overall progress for the entire organization or at a department level: dashboards with 7-axis spider plots (one axis for each dimension), as I suggested in last week's call. They can be used to report progress on the number of completed (Optimize) proof points for each dimension (like the spider example), or pie charts, etc. can show the breakdown of proof points by stage for a given dimension. Once the counts of which proof points are in which stages are input into the tool, there would be infinite ways to display the data.

<cl> I liked the spider plot idea; it would be interesting to see these plots for different companies or departments for some ideal / example cases, i.e., a company with perfect accessibility maturity vs. a company with zero accessibility maturity, and then some others in between that are doing well in some areas but are only at the Launch stage in other dimensions.

<cl> We really need to see the different stages (None, Launch, Integrate, Optimize) shown per dimension, which I don't think we really have yet with just those numbers, to be honest.
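[For reference, the partial-credit idea in the bullets above amounts to a simple lookup table. Here is a minimal illustration; the per-dimension averaging is my own framing for the example, not something the thread specifies.]

```python
# Illustrative sketch of the partial-credit scoring floated above:
# each proof point earns a percentage based on the stage it has reached.
STAGE_CREDIT = {
    "No Activity": 0.0,   # no checkmark
    "Launch": 0.25,       # 25% of the way to Optimize
    "Integrate": 0.55,    # 55% of the way to Optimize
    "Optimize": 1.0,      # complete
}

def dimension_score(stages):
    """Average partial credit across the proof points of one dimension
    (the averaging is an assumption made for this example)."""
    return sum(STAGE_CREDIT[s] for s in stages) / len(stages)

# A hypothetical dimension with 4 proof points, one in each stage:
score = dimension_score(["No Activity", "Launch", "Integrate", "Optimize"])
print(f"{score:.0%}")  # (0 + 0.25 + 0.55 + 1.0) / 4 = 45%
```

One such score per dimension would also give the seven radii needed for the 7-axis spider plot mentioned above.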
They give a slight indication, but not really, as I showed when you take out all the Not Applicable entries, etc.

<jk> I'm thinking to add a discussion as an agenda item... perhaps proposing a subgroup to take it on.

<cl> Sounds good.

<jk> Hope this was at least somewhat clear.

<cl> Yeah, great analysis; I totally agree with you.

<jk> Your thoughts?

jeff
Attachments
- image/png attachment: image001.png
Received on Tuesday, 23 January 2024 16:31:34 UTC