Our Services
Conformity Assessments
A wave of new legislation, both foreign and domestic, mandates that prediction algorithms be tested for algorithmic fairness through impact or conformity assessments before deployment. Fairlogic evaluates predictive models for the risk of algorithmic bias against persons based on their protected class status (e.g., race, ethnicity, sex, religion, disability, age, or national origin). In assessing for bias, Fairlogic uses a suite of fairness metrics carefully selected by our experts.
Bias Mitigation
If algorithmic bias is detected, Fairlogic determines which type of mitigation technique to apply (pre-processing, in-processing, or post-processing) and implements it with a suitable mitigation algorithm.
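To make the three mitigation families concrete, here is a minimal, illustrative sketch of one common pre-processing technique, reweighing (Kamiran & Calders). It is not Fairlogic's production tooling, and the column names group and label are placeholders.

import pandas as pd

def reweigh(df, group_col, label_col):
    # Compute instance weights that make the protected attribute and the
    # label statistically independent in the weighted training data.
    n = len(df)
    p_group = df[group_col].value_counts(normalize=True)
    p_label = df[label_col].value_counts(normalize=True)
    p_joint = df.groupby([group_col, label_col]).size() / n
    # Weight = expected joint probability under independence / observed joint probability
    return df.apply(
        lambda row: p_group[row[group_col]] * p_label[row[label_col]]
                    / p_joint[(row[group_col], row[label_col])],
        axis=1,
    )

The resulting weights can be passed to any estimator that accepts a sample_weight argument when the model is retrained on the original data.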
Fairness Metrics
The selection of metrics is a critical part of conducting assessments. At Fairlogic, we situate a model within its proper context. This involves understanding its functionality, the specific methods used to implement the model's task (e.g., classifier, recommender), the assumptions underlying data collection and integrity, the persons or groups that may be negatively affected, and whether particular metrics are prescribed by existing laws and regulations.
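For illustration only, the sketch below computes two widely used fairness metrics, statistical parity difference and equal opportunity difference, on hypothetical binary predictions, labels, and a binary protected attribute (1 = privileged group). Which metrics are actually appropriate depends on the model's context and any applicable regulation, as described above.

import numpy as np

def statistical_parity_difference(y_pred, group):
    # P(prediction = 1 | unprivileged) - P(prediction = 1 | privileged)
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

def equal_opportunity_difference(y_true, y_pred, group):
    # Difference in true positive rates between the two groups.
    tpr_unpriv = y_pred[(group == 0) & (y_true == 1)].mean()
    tpr_priv = y_pred[(group == 1) & (y_true == 1)].mean()
    return tpr_unpriv - tpr_priv

# Hypothetical example data:
y_true = np.array([1, 0, 1, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 1])
group  = np.array([0, 0, 0, 1, 1, 1])
print(statistical_parity_difference(y_pred, group))         # about -0.33
print(equal_opportunity_difference(y_true, y_pred, group))  # -0.5

Values near zero on a given metric suggest parity between groups on that particular definition of fairness.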
Risk Management
Fairlogic can assist organizations in developing risk management and compliance systems that align with the standards and recommendations of NIST's AI Risk Management Framework and ISO/IEC 42001 for implementing and maintaining safe and trustworthy AI systems. These standards are increasingly being incorporated into legislation as legal requirements.

Contact us
Interested in working together? Fill out some info and we will be in touch shortly. We can’t wait to hear from you!