Credit scoring has never been used to evaluate charities, only borrowers and companies.
I have been framing the question so far as an either/or situation, but in fact discriminant analysis and credit scoring give philanthropists a score on a continuum, which allows us to distinguish the very best charities, the well run, the moderately managed, the under-par, and the hopeless: those that philanthropists should avoid at all costs.
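The continuum idea can be sketched as a simple linear scoring function. The variable names, weights, and rating cutoffs below are invented purely for illustration; they are not the actual model described in this book:

```python
# Illustrative sketch of a linear (discriminant-style) charity score.
# All variable names, weights, and rating cutoffs here are hypothetical.

def charity_score(indicators, weights):
    """Weighted sum of normalized indicators -> a score on a continuum."""
    return sum(weights[k] * indicators[k] for k in weights)

def rating(score):
    """Map the continuous score onto discrete rating bands."""
    bands = [(90, "AAA"), (75, "AA"), (60, "A"), (40, "B")]
    for cutoff, label in bands:
        if score >= cutoff:
            return label
    return "C"  # the charities philanthropists should avoid

# Hypothetical weights and one charity's normalized indicators (0 to 1).
weights = {"admin_effectiveness": 40, "financial_stability": 35, "transparency": 25}
indicators = {"admin_effectiveness": 0.9, "financial_stability": 0.8, "transparency": 0.7}

score = charity_score(indicators, weights)  # 0.9*40 + 0.8*35 + 0.7*25 = 81.5
print(score, rating(score))  # prints: 81.5 AA
```

The point of the continuum is visible in `rating`: instead of a binary good/bad verdict, the score falls somewhere on a scale, and the bands simply label regions of it.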
We use 42 different measurements to define our charity scoring equation, far more data points than are normally used for for-profit companies, because charities are much harder to evaluate, a fact already known to most philanthropists.
One reason, as I have already mentioned, is the lack of effective metrics such as profits or market share when it comes to charities.
However, that should not deter us; it only makes the analysis tougher, requiring us to identify proxies for variables such as efficiency and profitability.
From a statistical point of view, this only means that the variance of our classification of charities is somewhat broader than that of companies.
The probability of misclassifying a charity as an AAA when in fact it is only an AA is higher than when we are classifying senior debt of listed companies.
We had to use more indirect measures, such as frequency of board meetings.
We know for a fact that boards that meet every month are more effective than boards that meet every six months; meeting frequency is thus one of the criteria that discriminate between well managed and poorly managed charities.
The 42 measurements are divided into six broad areas.
1. Administrative effectiveness
3. Financial stability
4. Quality of administrative controls
5. Legal compliance
6. Public recognition
Every single one of the 42 variables or performance measurements is quantifiable; there is not a single subjective measurement in the process.
This is what makes the selection scientific. Anyone using the same criteria and the same set of data would come up with the same selection of charities.
Compare this with the usual selection process for NGO awards, in which normally 10 representatives of the charity sector meet for an afternoon cup of tea and select 10 leaders or 10 projects from a list of previously submitted entries.
Depending on the mood those ten members woke up in that morning, the results could be completely different.
Have the same committee members choose again six months later, after a bout of amnesia, and I will bet the results will be completely different.
That is why peer-review-based prizes never attain the credibility of Olympic medals, which are always based on measurements such as times or distances thrown.
Peer review prizes are invariably tarred with cronyism, favoritism, and injustices that blemish even the prize winners.
Credit scoring charities does away with all this subjectivity.
One can criticize the criteria used, one can question whether the number of variables should be 50 or 55, but one cannot claim that the outcome is subjective.
We criticize our criteria every year, and drop some, and include others if the statistics demand it.
The Best Run Companies and The Best Run Charities, the two selection processes I have conducted for 30 years, never received an accusation of favoritism or cronyism.
One common criticism was that we did not develop sector-specific criteria, for health or education for example.
This criticism is harder to answer, but we were trying to classify different charities in different fields, and that requires common denominators.
Nor are sector-specific evaluation criteria easy to determine objectively: education may require a 30-year evaluation span before one can tell whether it was indeed effective.
The gist of our evaluation is management, transparency, information flow, and the premise that well managed charities will be carrying out what donors expect them to do, whatever the mission is or specific field the charity is in.
By the way, stock analysis follows the same basic rule.
No Wall Street research house or broker test drives GM cars, or Procter and Gamble diapers, before issuing a BUY recommendation.
They basically analyze management and the financial structure of these companies.
Another issue is the refinement of existing measurements or the introduction of new, non-sector-specific ones.
Over the last 10 years we have perfected the evaluation process and refinements were introduced as new measurements.
What we always ask is whether the new refinement or variable really adds value to the process.
More often than not, the new classification or list of charities comes out practically the same, identifying say 49 out of the original 50 charities.
When you already have 42 variables, the next one usually contributes very little, for two reasons. First, the variables are highly correlated: we could actually give the award using only 17 of them, with practically the same degree of confidence as using all 42, but we keep the redundancy because we are treading on new ground. Secondly, well run charities usually maintain a standard of excellence in everything they do.
If we were to create a totally new evaluation criterion, chances are we would find a degree of efficiency similar to all the rest.
If rooms are clean, chances are bathrooms and kitchens also are.
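The redundancy argument can be illustrated numerically. The data below are entirely synthetic, generated under the assumption the text makes: that one underlying level of managerial quality drives all of a charity's indicators, so the indicators are highly correlated:

```python
# Synthetic illustration: when indicators are strongly correlated,
# a small subset of variables ranks charities almost exactly like the full set.
import random

random.seed(0)
NUM_VARS, SUBSET, NUM_CHARITIES = 42, 17, 50

def make_charity():
    # One underlying "quality" level drives all 42 indicators (plus small noise),
    # mimicking the claim that well run charities excel at everything.
    quality = random.random()
    return [min(1.0, max(0.0, quality + random.gauss(0, 0.05)))
            for _ in range(NUM_VARS)]

charities = [make_charity() for _ in range(NUM_CHARITIES)]

# Rank charities by the sum of all 42 indicators vs. only the first 17.
rank_full = sorted(range(NUM_CHARITIES), key=lambda i: -sum(charities[i]))
rank_sub = sorted(range(NUM_CHARITIES), key=lambda i: -sum(charities[i][:SUBSET]))

top10_full, top10_sub = set(rank_full[:10]), set(rank_sub[:10])
print(len(top10_full & top10_sub))  # nearly all of the top 10 coincide
```

Because every indicator is a noisy copy of the same quality level, dropping 25 of the 42 variables barely changes who ends up in the top ten, which is exactly why each additional variable contributes so little.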
The best part of the Award is the Award giving itself.
It is a very emotional ceremony: devoted volunteers and managers break down in tears, and for many this is their first public recognition in years, for some the first in their lives.
For-profit companies have the annual distribution of dividends as their reward; charities have nothing. The "Premio Bem Eficiente" is a coveted and well deserved award for those who receive it and a landmark in the history of philanthropy.
May I add that this may be the most cost-effective project in the world: it costs US$300,000 a year and has returned US$1,000,000,000 in additional donations over the last ten years.