Web3Privacy Now platform

Initial project breakdown

  1. Objective: scoring model prototype (plain text → DB)
    Tasks
  • interview 50 privacy experts
  • aggregate answers into the model
  • experiment with the model
  • validate model with 10 experts (test & try)

Timing: 30 days.

  2. Objective: scoring model feasibility (finding the right balance between data parsing (APIs) and manual work (test environment, DB))
    Tasks
  • CTO’s feedback on data aggregation (approach, stack, model flaws)
  • data scientist’s general feedback (approach, challenges)
  • experiment with the data aggregation (parsing validity, data aggregation efficiency)
  • scoring model revision based on semi-automation of the DB
  • semi-automated scoring model MVP (DB); see the sketch after this objective
  • a realistic estimate of the manual work the DB requires (for project sustainability)

Timing: 20 days.
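To make the “data parsing vs. manual work” balance concrete, here is a minimal Python sketch of what a semi-automated scoring record could look like: some criteria filled automatically via APIs, others by hand in the DB. All field names, criteria, and the equal-weight scoring rule are illustrative assumptions, not the actual model.

```python
# Hypothetical sketch of a semi-automated scoring record: some criteria are
# fetched automatically (APIs), others are filled in manually by researchers.
# All field names and the scoring rule are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProjectRecord:
    name: str
    # Automated criteria (parseable from public APIs, e.g. a code-hosting API):
    open_source: Optional[bool] = None      # repo publicly readable?
    docs_published: Optional[bool] = None   # documentation URL resolves?
    audit_published: Optional[bool] = None  # audit report discoverable?
    # Manual criteria (filled in by a researcher in the DB):
    privacy_policy_reviewed: Optional[bool] = None
    team_responded_to_survey: Optional[bool] = None

def score(record: ProjectRecord) -> float:
    """Naive equal-weight score; unknown (None) criteria are skipped,
    so a partially filled record still gets a provisional score."""
    criteria = [
        record.open_source,
        record.docs_published,
        record.audit_published,
        record.privacy_policy_reviewed,
        record.team_responded_to_survey,
    ]
    known = [c for c in criteria if c is not None]
    return sum(known) / len(known) if known else 0.0

# Example: parsing filled three fields, manual review one; one remains unknown.
rec = ProjectRecord("example-project", open_source=True, docs_published=True,
                    audit_published=False, privacy_policy_reviewed=True)
print(f"{rec.name}: {score(rec):.2f}")  # -> example-project: 0.75
```

The useful property of this shape is that automation can be added criterion by criterion: a record stays scoreable while some fields are still gathered manually, which is exactly the semi-automation trade-off this objective describes.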

  3. Objective: website deployment
    Tasks
  • design + basic branding
  • copywriting
  • DB sync
  • CMS
  • QA
  • GitHub publishing
  • forum launch

Timing: 30 days.

Behind the scenes:

  • bug fixing
  • scoring model tweaking
  • content editing
  • social media posting
  • community feedback
  • assessment of new projects
  • DB fixing & much more

Updates and progress from Mykola, following the initial grant application:

Considerations:

  • Make privacy scoring objective
  • Bring the community (incl. experts) into the scoring discussion

What we did in the meantime

  • Collaborated with the ETH Brno team (behind a great privacy & security hackathon) to make Web3Privacy Now a communal project
  • We created an organization on GitHub & transferred all repos there: link
  • We are organising a privacy conference in Prague (June), the 1st of many Web3Privacy Now events. Mario Havel, Manu Azuru, and the Secret Network founder are already among the 1st speakers.
  • The Prague event will be complementary to ETH Brno: Prague is the “knowledge builder”, Brno is hackathon (tooling) centric.
  • Asked 100 projects about their internal view on privacy for the general public: builder notes
  • I created a concept of Privacy Readiness Levels (a lite take on technology readiness levels) & will expand it in an essay soon; this will create a new paradigm for how to measure privacy maturity

How we want to make privacy scoring objective & communal

  • Deliver data + analytics to the community. Funded research (raw data + scoring model assumptions + a simplified research-paper-style write-up) is a “topic starter” for deep communal involvement
  • Create space & conditions for discussion. Once research phase 1 is done, we will launch all the findings on the project’s forum & welcome experts for discussion & deep analysis. Note: we understand that experts can be lazy, so we will go to their Discords & start discussions there (scoring model + their own project). We already talk with them there, but mostly to extract “internal” data.
  • Publish the 1st internal + communal findings on the platform. Note: we don’t think that even a communal model can be objective from the get-go (why: our findings show that the majority of projects don’t let the general public check their privacy, because the “read the docs” or “read the code” approach excludes most of the audience). We believe the 1st take is necessary to set a precedent for the industry to build semi-automated tools for privacy assessment. Our role here:
    • Change catalyst. Pushing teams to create new analytics tools (just as “Etherscan” was once born out of necessity)
    • Knowledge supplier. We want to provide step-by-step guides to our scoring assumptions, so everyone understands the “how” & the challenges along the way.
  • Time + expertise-level lenses. We want to measure the time & skill requirements for privacy assessment, then publish findings on how hard it is for the general public to execute one.

A well-balanced scoring model and a highly accurate &/or semi-automated tool are a matter of time & collaboration with the industry. But for them to appear, extensive research & a public “paper” on the state of privacy assessment must be released first.

Further progress:

  1. Non-techie assessment readiness. We surveyed 50+ projects, from Privacy & Scalability Explorations to Sismo, on their recommendations for how non-techies could assess their privacy readiness & made a table with their answers.
  2. Techie assessment readiness. We made the important assumption that “not every techie is created equal” (think junior dev vs. lead). So I’m applying a grade system to techies based on their ability to understand a codebase, read docs, trace transactions, and perform cross-network & cross-solution assessment (“you build a wallet, but can you understand a Layer 2 like Aztec?”). Also in the table; a sketch of the grade idea follows below.
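As a rough illustration of that grade-system assumption, here is a minimal Python sketch mapping hypothetical techie grades to the assessment skills they unlock. The grade names and skill sets are assumptions for illustration only, not the contents of the actual table.

```python
# Hypothetical sketch of the "not every techie is created equal" grading idea:
# each grade unlocks assessment skills, so a reviewer's grade tells us which
# parts of a privacy assessment they can realistically perform.
from enum import Enum

class TechieGrade(Enum):
    JUNIOR = 1
    MIDDLE = 2
    SENIOR = 3
    LEAD = 4

# Assessment skills unlocked at each grade (cumulative with lower grades).
SKILLS_BY_GRADE = {
    TechieGrade.JUNIOR: {"read the docs"},
    TechieGrade.MIDDLE: {"read the codebase"},
    TechieGrade.SENIOR: {"trace transactions", "cross-network assessment"},
    # e.g. a wallet dev who can also assess a Layer 2 like Aztec:
    TechieGrade.LEAD: {"cross-solution assessment"},
}

def skills_for(grade: TechieGrade) -> set:
    """All skills available at a grade: its own plus every lower grade's."""
    return set().union(*(skills for g, skills in SKILLS_BY_GRADE.items()
                         if g.value <= grade.value))

print(sorted(skills_for(TechieGrade.SENIOR)))
# -> ['cross-network assessment', 'read the codebase', 'read the docs',
#     'trace transactions']
```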

Useful links:

  • initial idea description: GitHub
  • scoring model progress: GitHub

Current update: link

  • Analysed the scoring model based on a survey of 50+ privacy projects
  • Segmented answers into takeaways prior to “product features”
  • Segmented product features into a potential framework

And here’s the newest framework for non-techies, ready for field testing: click

We aim to

  • find valuable criteria that could be automated in the near future (for the MVP, gathered manually)
  • map subjective criteria, but with values attested by the community (like which exact private data is most crucial in exploits), for DYOR
  • highlight the necessary checklists (like “tracing transactions”); see the sketch after this list
  • impact potential privacy tooling (QA automation)
  • impact audit criteria for privacy services
Feel free to comment.

Note: we are concentrating on “non-techies” at the moment, because they are the most vulnerable audience when it comes to privacy attestations (they can’t easily “trace a transaction” or “read the code”).