It may be time to dust off a concept that has not been discussed for some time. Let me first explain the two things that led me to this conclusion.
Access to bulk data
Adding “bulk data access” to #FHIR was a hot topic at the Task Force meeting in San Diego. Grahame explained it on his blog. What is not well explained are the use-cases driving this API request. Without use-cases we have only a “solution”, not a “problem”; in that situation we either end up with an unused solution, or a solution that does not fit properly. The only clue to what the problem is comes from part of a sentence in the first paragraph of Grahame’s blog article:
“… to support exchanges based on resources, assessments and other value-based services”.
No use-case is stated there… but you can imagine that population health reporting, clinical research data extraction, insurance fraud discovery, and all kinds of other things would want bulk access. You should be concerned about protecting privacy, for good reason.
There are a few, possibly related, discussions on de-identification (a term I use at a high level, inclusive of pseudonymization and anonymization). These appear only as a “solution”, and when I try to discover the “problem” they are trying to solve, I can find no details. So I worry that the solution will be useless, or that the solution will not be used.
I have submitted this problem to FHIR as a change request; FHIR does not need to specify the actual algorithm. The proposal is to add a parameter to any FHIR query that requests that the results be de-identified. The only universally implementable response to such a request is to return an empty Bundle, since removing everything is the only de-identification that can be applied when no use-case is known.
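To make the proposal concrete, here is a minimal sketch in Python. The parameter name `_deidentify` is my own illustration (the change request does not define one), and `guaranteed_response` shows the one response every server could implement: the empty Bundle.

```python
# Sketch of the proposed query parameter. "_deidentify" is a hypothetical
# name used for illustration only -- it is not part of the FHIR specification.
import urllib.parse

def deidentified_query_url(base_url, resource_type, params):
    """Build a FHIR search URL that asks the server to de-identify results."""
    query = dict(params)
    query["_deidentify"] = "true"   # hypothetical parameter
    return f"{base_url}/{resource_type}?{urllib.parse.urlencode(query)}"

def guaranteed_response():
    """The only universally safe answer: an empty searchset Bundle."""
    return {"resourceType": "Bundle", "type": "searchset", "total": 0, "entry": []}
```

A server that knows nothing about the requester’s use-case can always honor the parameter by returning `guaranteed_response()`, which is exactly why the parameter alone is not enough.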
De-identification is a process
De-identification is a process that requires a “use-case” (I need this data for this purpose, I need these specific elements, I have no need to re-identify, I need this degree of statistical rigor, etc.). The de-identification process determines how the data can be manipulated to minimize privacy risk while still meeting the needs of the use-case. IHE has a handbook that walks through the de-identification process.
Every use-case for de-identification has different requirements. That is mostly true, but some patterns can be identified once a particular set of use-cases is defined. DICOM has done this well (see Part 15, Chapter E), enabled by its mature data model (FHIR is not yet mature enough to recognize such patterns). Even in DICOM, these patterns cover only a minority of use-cases.
More specific use-cases require slightly different algorithms, with different acceptability criteria. For example, IHE has published a de-identification algorithm in its Family Planning handbook. Another example is Apple’s differential privacy (a form of fuzzing). These are algorithms, but not general-purpose algorithms. These examples show that each de-identification algorithm is designed to meet the needs of a use-case while reducing privacy risk.
In the end, the privacy risk is never zero, unless you delete all the data (an empty set).
De-Identification as a Service
I suggest that a service can be specified (an http binding). This service would be defined so that it can be deployed and thus placed into a pipeline. The input is the result of a #FHIR query (simple or compound), which is a Bundle of FHIR resources. The de-identification gateway service is also given a De-Identification Algorithm identifier. The result is a Bundle that has been de-identified according to that algorithm. It may be an empty Bundle where the algorithm cannot be satisfied. One reason it might not be satisfied is that the algorithm requires the output to meet a quality measure of de-identification, such as k-anonymity.
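The gateway described above can be sketched as follows. This is not a defined FHIR operation; the algorithm registry, the `research-k5` identifier, and the quasi-identifier choice are all hypothetical, and a real deployment would sit behind an http binding rather than a function call.

```python
# Sketch of a de-identification gateway: given a Bundle and an algorithm id,
# return either the de-identified Bundle or an empty Bundle when the
# algorithm's quality requirement (here, k-anonymity) cannot be met.
# All names and thresholds below are illustrative assumptions.
from collections import Counter

def k_anonymous(records, quasi_identifiers, k):
    """True if every combination of quasi-identifier values occurs >= k times."""
    counts = Counter(tuple(r.get(q) for q in quasi_identifiers) for r in records)
    return all(c >= k for c in counts.values())

def empty_bundle():
    return {"resourceType": "Bundle", "type": "searchset", "total": 0, "entry": []}

ALGORITHMS = {
    # algorithm id -> (per-resource transform, whole-output quality check)
    "research-k5": (
        lambda res: {key: v for key, v in res.items() if key not in ("name", "telecom")},
        lambda recs: k_anonymous(recs, ("gender", "birthDate"), 5),
    ),
}

def deidentify(bundle, algorithm_id):
    """Apply the identified algorithm; release nothing if its quality bar fails."""
    transform, quality_ok = ALGORITHMS[algorithm_id]
    records = [transform(e["resource"]) for e in bundle.get("entry", [])]
    if not quality_ok(records):
        return empty_bundle()  # cannot satisfy the algorithm: empty Bundle
    return {"resourceType": "Bundle", "type": "searchset",
            "total": len(records),
            "entry": [{"resource": r} for r in records]}
```

Note the design choice: the quality check runs over the whole output, so the same query can succeed against a large population and yield an empty Bundle against a small one.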
The De-Identification Algorithm identified here is itself the output of the de-identification process described above.