Initial commit of Protocol Bicorder with json and txt set up
README.md (new file, 110 lines)
# Protocol Bicorder

The Protocol Bicorder is a diagnostic tool for the study of protocols. It allows a human or machine user to evaluate protocol characteristics along a series of gradient scales.

The name is a tribute to the [tricorder](https://en.wikipedia.org/wiki/Tricorder), a fictional device in the Star Trek franchise that the characters can use to obtain all manner of empirical data about their surroundings.
## Using the bicorder

These instructions can guide humans or machines in using the bicorder. Foundationally, the bicorder is a JSON file whose values may or may not be filled out; a completed bicorder file should have all or most of its values filled out. The values can be filled out either manually, by editing the JSON file directly, or with software that takes input in other forms and translates it into a JSON reading. These instructions assume direct editing of the JSON file, but they should be useful with other interfaces as well.

The bicorder consists of three components:

* `metadata` about the current reading, including information about the protocol and its analyst
* `diagnostic` sets composed of gradients that measure the analyst's interpretation of the protocol
* `analysis` entries that interpret the diagnostic data
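In outline, the top level of that JSON file looks like the following (abridged from `bicorder.json`; most fields are omitted here):

```json
{
  "metadata": {
    "protocol": null,
    "analyst": null,
    "standpoint": null,
    "timestamp": null
  },
  "diagnostic": [
    { "set_name": "Design", "gradients": [
      { "term_left": "explicit", "term_right": "implicit", "value": null } ] }
  ],
  "analysis": [
    { "term_left": "hardness", "term_right": "softness",
      "instructions": "...", "value": null }
  ]
}
```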
### Metadata

Several pieces of information provide metadata about a given reading with the bicorder. More details about the data formats for each input are provided in `bicorder.schema.json`.

* `protocol`: Name or brief description of the protocol
* `analyst`: Name or other identifier of the analyst conducting the diagnostic
* `standpoint`: A description, even at some length, of the relationship between the analyst and the protocol, including any relevant context that could affect the diagnostic readings
* `timestamp`: A timestamp for when the reading occurred
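For instance, a filled-out `metadata` block might look like this (all values here are invented for illustration):

```json
"metadata": {
  "protocol": "Request for Comments (RFC) process",
  "analyst": "A. Example",
  "standpoint": "Has participated in the process as a contributor but not as an editor",
  "timestamp": "2025-01-15T10:30:00Z"
}
```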
### Diagnostic

To carry out the diagnostic, the analyst should consider the protocol from the perspective of one of the `gradients` at a time. Each gradient invites the analyst to determine where the protocol lies between two terms.

This is inevitably an interpretive exercise, but do your best to identify the most accurate `value`, with `1` being closest to `term_left` and `9` being closest to `term_right`.

Choosing a `value` in the middle, such as `5`, can mean "a bit of both" or "neither."
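For example, here is the first `Design` gradient from `bicorder.json` with a reading filled in (the `value` of `2` is invented for illustration):

```json
{
  "term_left": "explicit",
  "term_left_description": "The design is stated explicitly somewhere that is accessible to participants",
  "term_right": "implicit",
  "term_right_description": "The design is not stated explicitly but is learned by participants in another way",
  "value": 2,
  "citation": null
}
```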
### Analysis

The Analysis part of the bicorder is meant to be automated. Its `value` fields are based on calculations over the gradients above. Each analysis is itself a gradient whose value is derived from the gradients in the diagnostic, and each has an `instructions` field that explains how to produce its `value`.
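For instance, the `hardness` analysis instructs the analyst to take the mean of all gradient values and round it to the nearest integer. A minimal sketch of that calculation in Python (the function name and the sample values are ours, not part of the tool):

```python
from statistics import mean

def hardness_value(reading):
    # Mean of all filled-in diagnostic gradient values,
    # rounded to the nearest integer
    values = [
        gradient["value"]
        for diagnostic_set in reading.get("diagnostic", [])
        for gradient in diagnostic_set.get("gradients", [])
        if gradient.get("value") is not None
    ]
    return round(mean(values)) if values else None

# Hypothetical reading with three gradients filled out:
reading = {"diagnostic": [{"gradients": [{"value": 2}, {"value": 7}, {"value": 9}]}]}
print(hardness_value(reading))  # mean 6.0 rounds to 6
```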
## Interfaces

There are several ways to use the bicorder.

### Machine-readable JSON

The most basic way to use the bicorder is to simply edit a JSON file:

* Make a copy of `bicorder.json` with an appropriate file name
* Fill out the `metadata` and `diagnostic` sections appropriately
* Based on the `diagnostic` inputs, compute the `analysis` values following the relevant `instructions` fields
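Before computing the `analysis` values, it can help to verify that the `diagnostic` section is fully filled out. A small sketch of such a check (the function name and sample data are ours, not part of the tool):

```python
import json

def unfilled_gradients(reading):
    # List (set_name, term_left, term_right) for every diagnostic
    # gradient whose "value" is still null
    missing = []
    for diagnostic_set in reading.get("diagnostic", []):
        for gradient in diagnostic_set.get("gradients", []):
            if gradient.get("value") is None:
                missing.append((
                    diagnostic_set.get("set_name"),
                    gradient.get("term_left"),
                    gradient.get("term_right"),
                ))
    return missing

reading = json.loads("""
{"diagnostic": [{"set_name": "Design", "gradients": [
  {"term_left": "explicit", "term_right": "implicit", "value": 2},
  {"term_left": "precise", "term_right": "interpretive", "value": null}
]}]}
""")
print(unfilled_gradients(reading))  # the 'precise / interpretive' gradient is unfilled
```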
### ASCII chart

An ASCII chart can be generated from a JSON file. This can be useful as human-readable output, or as a human-usable way to carry out a diagnostic. The included Python script works both with the `bicorder.json` template and with a completed JSON file that has values added.

Usage:

`python ascii_bicorder.py [INPUT.json] [OUTPUT.txt]`

An example output file generated from the template is maintained at `bicorder.txt`.
### Human-usable web app

* Create an online tool for reporting on a protocol
    - A web app, for fun, that can be used with a mobile phone
    - Include tooltips for the gradient descriptions
    - Auto-generate the `analysis` values
    - Enable it to produce a JSON printout
### Synthetic data analysis

* Experiment plan
    - Have AIs fill out the bicorder with example protocols
    - Review papers on synthetic datasets
    - Perhaps use local models or campus Gemini; try two parallel processes with appropriate supervision, and find literature on this. Run parallel processes on both dataset generation and bicorder coding
    - Make sure to include a broad range of protocols, reflecting diversity of cultures, media, and scales
    - Data analysis of correlations
        - Which gradients seem to travel together?
    - Perhaps convert this or generate it in JSON so it is easily machine-ingestible
<!---
## To do

* Try data analysis
* Continue iterating on the bicorder design
* Develop the web app in a way that is tightly bound to the canonical JSON and schema

### Gradient citations

- implicit / explicit
    - See Pomerantz book on Standards, p. 16: de facto and de jure
- social / technical
    - From Primavera's Protocol Art talk
- Kafka / Whitehead (Asparouhova)
    - Measuring the extent to which the protocol imposes burdens on users, as opposed to freeing them to focus on something else
- flock / swarm (Fernández)
    - Measuring the degree of variation, as opposed to uniformity, the protocol enables
- soft / hard (Stark)
- dead / alive (Friend; Alston et al.; Walch)
    - Related especially to whether the protocol is actively performed
- insufficient / sufficient (Kittel & Shorin; Rao et al.)
    - Measuring the extent to which the protocol as such solves the problems it is designed to solve, or whether it relies on external mechanisms
- tense / crystallized
    - Marc-Antoine Parent countered the idea of "engineered arguments" (which assume ongoing tension) with "crystallized arguments" (which memorialize past tensions that are no longer active). For instance, English is tense; Arabic numerals are crystallized.
--->
## Authorship and licensing

Initiated by [Nathan Schneider](https://nathanschneider.info) and available for use under the [Hippocratic License](https://firstdonoharm.dev/) (do no harm!). Several AI assistants were utilized in developing this tool.

[](https://firstdonoharm.dev/version/3/0/core.html)
ascii_bicorder.py (new file, 160 lines)
#!/usr/bin/env python3
"""
Generate bicorder.txt from bicorder.json
"""

import json
import argparse
import sys


def center_text(text, width=80):
    """Center text within a given width"""
    return text.center(width)
def format_gradient_bar(value):
    """
    Format the gradient bar based on value.
    If value is None, show all bars: [|||||||||]
    If value is 1-9 (per the schema), mark that position with #,
    e.g. a value of 2 gives: [|#|||||||]
    """
    if value is None:
        return "[|||||||||]"

    # Ensure value is in the schema's valid range (1-9)
    if not isinstance(value, int) or value < 1 or value > 9:
        return "[|||||||||]"

    bars = list("|||||||||")
    bars[value - 1] = "#"  # value 1 marks the leftmost position
    return "[" + "".join(bars) + "]"
def format_gradient_line(term_left, term_right, value, left_width=18, right_width=18):
    """
    Format a gradient line with proper spacing.
    Example: "  explicit < [|||||||||] > implicit  "
    """
    bar = format_gradient_bar(value)
    # Right-align the left term, add the bar, then left-align the right term
    line = f"{term_left.rjust(left_width)} < {bar} > {term_right.ljust(right_width)}"
    return center_text(line)
def format_metadata_field(field_value, field_name):
    """
    Format a metadata field: show the value if provided,
    otherwise show the field name in brackets
    """
    if field_value is None or field_value == "":
        return f"[{field_name}]"
    return str(field_value)
def generate_bicorder_text(json_data):
    """Generate the formatted bicorder text from JSON data"""
    lines = []

    # First pass: calculate maximum widths for left and right terms
    max_left_width = 0
    max_right_width = 0

    # Check diagnostic gradients
    for diagnostic_set in json_data.get("diagnostic", []):
        for gradient in diagnostic_set.get("gradients", []):
            term_left = gradient.get("term_left", "")
            term_right = gradient.get("term_right", "")
            max_left_width = max(max_left_width, len(term_left))
            max_right_width = max(max_right_width, len(term_right))

    # Check analysis items
    for analysis_item in json_data.get("analysis", []):
        term_left = analysis_item.get("term_left", "")
        term_right = analysis_item.get("term_right", "")
        max_left_width = max(max_left_width, len(term_left))
        max_right_width = max(max_right_width, len(term_right))

    # Header
    lines.append(center_text("Protocol"))
    lines.append(center_text("BICORDER"))
    lines.append("")

    # Metadata section
    metadata = json_data.get("metadata", {})
    lines.append(center_text(format_metadata_field(metadata.get("protocol"), "Protocol")))
    lines.append(center_text(format_metadata_field(metadata.get("analyst"), "Analyst")))
    lines.append(center_text(format_metadata_field(metadata.get("standpoint"), "Standpoint")))
    lines.append(center_text(format_metadata_field(metadata.get("timestamp"), "Timestamp")))
    lines.append("")

    # Diagnostic sections
    for diagnostic_set in json_data.get("diagnostic", []):
        set_name = diagnostic_set.get("set_name", "").upper()
        lines.append(center_text(set_name))

        for gradient in diagnostic_set.get("gradients", []):
            term_left = gradient.get("term_left", "")
            term_right = gradient.get("term_right", "")
            value = gradient.get("value")

            lines.append(format_gradient_line(term_left, term_right, value, max_left_width, max_right_width))

        lines.append("")

    # Analysis section
    lines.append(center_text("ANALYSIS"))
    for analysis_item in json_data.get("analysis", []):
        term_left = analysis_item.get("term_left", "")
        term_right = analysis_item.get("term_right", "")
        value = analysis_item.get("value")

        lines.append(format_gradient_line(term_left, term_right, value, max_left_width, max_right_width))

    lines.append("")

    return "\n".join(lines)
def main():
    """Main function to read JSON and generate text output"""
    # Set up argument parser
    parser = argparse.ArgumentParser(
        description="Generate formatted bicorder text from JSON input"
    )
    parser.add_argument(
        "input_json",
        help="Path to input JSON file"
    )
    parser.add_argument(
        "output_txt",
        help="Path to output TXT file"
    )

    args = parser.parse_args()

    # Read the JSON file
    try:
        with open(args.input_json, "r") as f:
            data = json.load(f)
    except FileNotFoundError:
        print(f"Error: Input file '{args.input_json}' not found.", file=sys.stderr)
        sys.exit(1)
    except json.JSONDecodeError as e:
        print(f"Error: Invalid JSON in '{args.input_json}': {e}", file=sys.stderr)
        sys.exit(1)

    # Generate the formatted text
    output = generate_bicorder_text(data)

    # Write to output file
    try:
        with open(args.output_txt, "w") as f:
            f.write(output)
        print(f"Successfully generated '{args.output_txt}'")
    except IOError as e:
        print(f"Error: Could not write to '{args.output_txt}': {e}", file=sys.stderr)
        sys.exit(1)


if __name__ == "__main__":
    main()
bicorder.json (new file, 233 lines)
{
  "name": "Protocol Bicorder",
  "schema": "bicorder.schema.json",
  "version": "1.0.0",
  "description": "A diagnostic tool for the study of protocols",
  "author": "Nathan Schneider",
  "date_modified": "YYYY-MM-DD",

  "metadata": {
    "protocol": null,
    "analyst": null,
    "standpoint": null,
    "timestamp": null
  },

  "diagnostic": [
    {
      "set_name": "Design",
      "set_description": "How the protocol is created and remembered",
      "gradients": [
        {
          "term_left": "explicit",
          "term_left_description": "The design is stated explicitly somewhere that is accessible to participants",
          "term_right": "implicit",
          "term_right_description": "The design is not stated explicitly but is learned by participants in another way",
          "value": null,
          "citation": null
        },
        {
          "term_left": "precise",
          "term_left_description": "The design is specified with a high level of precision that eliminates ambiguity in implementation",
          "term_right": "interpretive",
          "term_right_description": "The design is ambiguous, allowing participants a wide range of interpretation",
          "value": null,
          "citation": null
        },
        {
          "term_left": "elite",
          "term_left_description": "Design occurs through processes that involve powerful institutions and widespread recognition as normative",
          "term_right": "vernacular",
          "term_right_description": "Design occurs through evolving, peer-to-peer community interactions in order to suit participant-defined goals",
          "value": null,
          "citation": null
        },
        {
          "term_left": "documenting",
          "term_left_description": "The primary purpose is to document or validate activity that is occurring",
          "term_right": "enabling",
          "term_right_description": "The primary purpose is to enable activity that might not happen otherwise",
          "value": null,
          "citation": null
        },
        {
          "term_left": "static",
          "term_left_description": "Designed to be as fixed and unchanging as possible",
          "term_right": "malleable",
          "term_right_description": "Designed to be changed by participants according to evolving needs",
          "value": null,
          "citation": null
        },
        {
          "term_left": "technical",
          "term_left_description": "Primarily concerned with interactions among technologies",
          "term_right": "social",
          "term_right_description": "Primarily concerned with interactions among people or groups",
          "value": null,
          "citation": null
        },
        {
          "term_left": "universal",
          "term_left_description": "Addressed to a global audience",
          "term_right": "particular",
          "term_right_description": "Addressed to a specific community",
          "value": null,
          "citation": null
        },
        {
          "term_left": "durable",
          "term_left_description": "Designed to be persistently available",
          "term_right": "ephemeral",
          "term_right_description": "Designed to vanish when no longer needed",
          "value": null,
          "citation": null
        }
      ]
    },
    {
      "set_name": "Entanglement",
      "set_description": "How the protocol relates with participant agents",
      "gradients": [
        {
          "term_left": "macro",
          "term_left_description": "Operates at large scales involving many participants or broad scope",
          "term_right": "micro",
          "term_right_description": "Operates at small scales with few participants or narrow scope",
          "value": null,
          "citation": null
        },
        {
          "term_left": "sovereign",
          "term_left_description": "A distinctive operating logic, not subject to any other entity",
          "term_right": "subsidiary",
          "term_right_description": "An operating logic under the control of a particular entity",
          "value": null,
          "citation": null
        },
        {
          "term_left": "self-enforcing",
          "term_left_description": "Rules are automatically enforced through its own mechanisms",
          "term_right": "enforced",
          "term_right_description": "Rules require external enforcement by authorities or institutions",
          "value": null,
          "citation": null
        },
        {
          "term_left": "analyzed",
          "term_left_description": "Participants learn the protocol by studying it intellectually",
          "term_right": "embodied",
          "term_right_description": "Participants learn the protocol by physically practicing it",
          "value": null,
          "citation": null
        },
        {
          "term_left": "obligatory",
          "term_left_description": "Participation is compulsory for a certain class of agents",
          "term_right": "voluntary",
          "term_right_description": "Participation in the protocol is optional and not coerced",
          "value": null,
          "citation": null
        },
        {
          "term_left": "flocking",
          "term_left_description": "Coordination occurs through centralized direction or direct mimicry",
          "term_right": "swarming",
          "term_right_description": "Coordination occurs through distributed interactions without central direction",
          "value": null,
          "citation": null
        },
        {
          "term_left": "defensible",
          "term_left_description": "Strong boundaries and protections against external influence",
          "term_right": "exposed",
          "term_right_description": "Weak boundaries and vulnerable to external influence",
          "value": null,
          "citation": null
        }
      ]
    },
    {
      "set_name": "Experience",
      "set_description": "How the protocol is perceived in the context of its implementation",
      "gradients": [
        {
          "term_left": "sufficient",
          "term_left_description": "Adequately meets the needs and goals of participants",
          "term_right": "insufficient",
          "term_right_description": "Does not, on its own, adequately meet the needs and goals of participants",
          "value": null,
          "citation": null
        },
        {
          "term_left": "crystallized",
          "term_left_description": "Content and meaning are settled and widely agreed upon",
          "term_right": "contested",
          "term_right_description": "Content and meaning are disputed or under debate",
          "value": null,
          "citation": null
        },
        {
          "term_left": "trust-evading",
          "term_left_description": "Minimizes the need for trust among participants",
          "term_right": "trust-inducing",
          "term_right_description": "Relies on or cultivates trust among participants",
          "value": null,
          "citation": null
        },
        {
          "term_left": "predictable",
          "term_left_description": "Produces expected and consistent outcomes",
          "term_right": "emergent",
          "term_right_description": "Produces unexpected or novel outcomes",
          "value": null,
          "citation": null
        },
        {
          "term_left": "exclusion",
          "term_left_description": "The protocol creates barriers or excludes certain participants",
          "term_right": "inclusion",
          "term_right_description": "The protocol reduces barriers and includes diverse participants",
          "value": null,
          "citation": null
        },
        {
          "term_left": "Kafka",
          "term_left_description": "Fosters experiences of absurd complexity, alienation, and powerlessness",
          "term_right": "Whitehead",
          "term_right_description": "Enables participants to carry out desired activities with less work or thought",
          "value": null,
          "citation": null
        },
        {
          "term_left": "dead",
          "term_left_description": "Not actively utilized by relevant participants",
          "term_right": "alive",
          "term_right_description": "Actively utilized by relevant participants",
          "value": null,
          "citation": null
        }
      ]
    }
  ],

  "analysis": [
    {
      "term_left": "hardness",
      "term_left_description": "The protocol tends toward properties characterized by hardness",
      "term_right": "softness",
      "term_right_description": "The protocol tends toward properties characterized by softness",
      "instructions": "Take all the 'value' fields in the gradients above and determine a mean. Round it to the nearest integer. That is the 'value' here.",
      "value": null,
      "citation": null
    },
    {
      "term_left": "polarized",
      "term_left_description": "The analyst tended toward more extreme high or low readings",
      "term_right": "centrist",
      "term_right_description": "The analyst tended toward readings at the middle of the gradients",
      "instructions": "Take all the 'value' fields in the gradients above. Assess their degree of polarization. For instance, if all the values are either 1 or 9, the output would be 1, and if all of them are 5 (the midpoint of the scale), the output would be 9.",
      "value": null,
      "citation": null
    }
  ]
}
bicorder.schema.json (new file, 147 lines)
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "$id": "https://example.com/bicorder.schema.json",
  "title": "Protocol Bicorder",
  "description": "Schema for Protocol Bicorder diagnostic tool",
  "type": "object",
  "required": ["name", "version", "description", "author", "metadata", "diagnostic", "analysis"],
  "properties": {
    "name": {
      "type": "string",
      "description": "Name of the tool"
    },
    "version": {
      "type": "string",
      "pattern": "^\\d+\\.\\d+\\.\\d+$",
      "description": "Semantic version number (major.minor.patch)"
    },
    "description": {
      "type": "string",
      "description": "Brief description of the tool"
    },
    "author": {
      "type": "string",
      "description": "Author of the tool"
    },
    "date_modified": {
      "type": ["string", "null"],
      "format": "date",
      "description": "Date when the document was last modified (ISO 8601 format: YYYY-MM-DD)"
    },
    "metadata": {
      "type": "object",
      "required": ["protocol", "analyst", "standpoint", "timestamp"],
      "properties": {
        "protocol": {
          "type": ["string", "null"],
          "description": "Brief name of the protocol being analyzed"
        },
        "analyst": {
          "type": ["string", "null"],
          "description": "Who is doing the analysis"
        },
        "standpoint": {
          "type": ["string", "null"],
          "description": "The analyst's relationship to or perspective on the protocol"
        },
        "timestamp": {
          "type": ["string", "null"],
          "format": "date-time",
          "description": "Timestamp of the analysis (ISO 8601 format)"
        }
      }
    },
    "diagnostic": {
      "type": "array",
      "description": "Array of diagnostic sets",
      "items": {
        "type": "object",
        "required": ["set_name", "set_description", "gradients"],
        "properties": {
          "set_name": {
            "type": "string",
            "description": "Name of the diagnostic set"
          },
          "set_description": {
            "type": "string",
            "description": "Description of the diagnostic set"
          },
          "gradients": {
            "type": "array",
            "description": "Array of gradient measurements",
            "items": {
              "type": "object",
              "required": ["term_left", "term_left_description", "term_right", "term_right_description", "value", "citation"],
              "properties": {
                "term_left": {
                  "type": "string",
                  "description": "Left term of the gradient"
                },
                "term_left_description": {
                  "type": "string",
                  "description": "Description of the left term"
                },
                "term_right": {
                  "type": "string",
                  "description": "Right term of the gradient"
                },
                "term_right_description": {
                  "type": "string",
                  "description": "Description of the right term"
                },
                "value": {
                  "type": ["integer", "null"],
                  "minimum": 1,
                  "maximum": 9,
                  "description": "Gradient value (1-9 scale, where 1 is closest to the left term and 9 is closest to the right term)"
                },
                "citation": {
                  "type": ["string", "null"],
                  "description": "Citation or evidence for better understanding this gradient"
                }
              }
            }
          }
        }
      }
    },
    "analysis": {
      "type": "array",
      "description": "Array of analytical measures derived from the diagnostics",
      "items": {
        "type": "object",
        "required": ["term_left", "term_left_description", "term_right", "term_right_description", "instructions", "value", "citation"],
        "properties": {
          "term_left": {
            "type": "string",
            "description": "Left term of the analytical gradient"
          },
          "term_left_description": {
            "type": "string",
            "description": "Description of the left term"
          },
          "term_right": {
            "type": "string",
            "description": "Right term of the analytical gradient"
          },
          "term_right_description": {
            "type": "string",
            "description": "Description of the right term"
          },
          "instructions": {
            "type": "string",
            "description": "Instructions for how to calculate this analytic"
          },
          "value": {
            "type": ["number", "null"],
            "description": "Calculated value of the analytic"
          },
          "citation": {
            "type": ["string", "null"],
            "description": "Citation or evidence for this analytic value"
          }
        }
      }
    }
  }
}
bicorder.txt (new file, 39 lines)
Protocol
BICORDER

[Protocol]
[Analyst]
[Standpoint]
[Timestamp]

DESIGN
      explicit < [|||||||||] > implicit
       precise < [|||||||||] > interpretive
         elite < [|||||||||] > vernacular
   documenting < [|||||||||] > enabling
        static < [|||||||||] > malleable
     technical < [|||||||||] > social
     universal < [|||||||||] > particular
       durable < [|||||||||] > ephemeral

ENTANGLEMENT
         macro < [|||||||||] > micro
     sovereign < [|||||||||] > subsidiary
self-enforcing < [|||||||||] > enforced
      analyzed < [|||||||||] > embodied
    obligatory < [|||||||||] > voluntary
      flocking < [|||||||||] > swarming
    defensible < [|||||||||] > exposed

EXPERIENCE
    sufficient < [|||||||||] > insufficient
  crystallized < [|||||||||] > contested
 trust-evading < [|||||||||] > trust-inducing
   predictable < [|||||||||] > emergent
     exclusion < [|||||||||] > inclusion
         Kafka < [|||||||||] > Whitehead
          dead < [|||||||||] > alive

ANALYSIS
      hardness < [|||||||||] > softness
     polarized < [|||||||||] > centrist