Evolution of Evaluation


“Evaluation is a very young discipline - although it is a very old practice.” (Scriven, 1996)

This chapter presents an overview of the global development evaluation scenario. Understanding the current scenario requires an idea of its historical development, which is therefore presented first.

Global Picture

This section describes how evaluation evolved as a field and introduces the important organisations and journals in the field of evaluation at the international level. Most of the published literature in this field comes from the United States of America and some from Europe; thus, there is a clear Western bias in the documentation of its history and of its important organisations and journals.

History of evaluation

The history of evaluation is as old as human activity. Humans (a) identify a problem or issue, (b) devise alternatives to tackle it, (c) evaluate the alternatives, and then (d) adopt those that results suggest will reduce the problem satisfactorily (Shadish & Luellen, 2005). Shadish and Luellen give examples of the earliest documented evaluations, from personnel evaluation in China over 4000 years ago to the evaluation of the Hebrew diet in the Bible. In the Western world, the development of program evaluation is divided into seven periods: first, the period prior to 1900, the Age of Reform; second, 1900-1930, the Age of Efficiency; third, 1930-1945, the Tylerian Age; fourth, 1946-1957, the Age of Innocence; fifth, 1958-1972, the Age of Development; sixth, 1973-1983, the Age of Professionalization; and seventh, 1983-2000, the Age of Expansion and Integration (Hogan, 2007).

In the Age of Reform, the earliest documented evaluations were of educational and production processes. In the Age of Efficiency, scientific management based on observation, measurement, analysis, and efficiency became prominent, and objective-based tests were used to determine the quality of educational instruction. In the Tylerian Age, criterion-referenced testing based on internal comparison of objectives and outcomes started. World War II was followed by a period of great growth in which accountability of national expenditure was ignored; this period is therefore labelled the Age of Innocence. Until this period, most literature on evaluation concerned educational evaluation. In the USA, with the Elementary & Secondary Education Act introducing supplementary programs to support the education of disadvantaged students, program evaluation as we know it started in the Age of Development. In the Age of Professionalization, many journals and university courses on evaluation were started, and evaluation was established as a formal, independent professional field.
With the increase in aid funding in the Age of Expansion and Integration, professional associations and evaluation standards were established (Hogan, 2007). In the new millennium, the focus is on capacity development and on building institutions for evaluation, in which organisations like the United Nations Evaluation Group and the World Bank play a major role. Instead of multiple agencies following multiple standards, there is a move towards consultative standardisation. I term this the Age of Consolidation (2000-current). In the past few decades, the following trends emerged in program evaluation (Hogan, 2007):

·        Increased priority and legitimacy of internal evaluation.

·        Expanded use of qualitative methods and a shift toward mixed quantitative-qualitative methods instead of depending exclusively on either.

·        Increased acceptance of and preference for multiple-method evaluations.

·        Introduction and development of theory-based evaluation.

·        Increased concern over ethical issues in conducting program evaluations and increased use of evaluation to empower program stakeholders.

·        Increased use  of  program  evaluation  within  business,  industry,  foundations, and other agencies in the private and non-profit sector.

·        Increased debate over whether evaluators should also be advocates for the programs they evaluate.

·        Advances in technology, communication, and ethical issues.

·        Modifications in evaluation strategies to accommodate increasing trends of government decentralization and delegation of responsibilities to states/provinces and localities.


  International organisations in evaluation

In the field of development program evaluation, a few organisations are widely recognised. These organisations, by their affiliation, are the leaders in the field. They are:

·        United Nations Evaluation Group, a platform for the evaluation offices of United Nations entities

·        Independent Evaluation Group of the World Bank

·        International Organisation for Cooperation in Evaluation

·        International Development Evaluation Association

·        American Evaluation Association

·        European Evaluation Society

The first two are the evaluating agencies for a large share of development aid, while the IOCE and IDEAS bring together different evaluation organisations. The last two are academic bodies which bring together the leading evaluation practitioners and theorists in the world.

The United Nations Evaluation Group was first established in January 1984 as the ‘Inter-Agency Working Group on Evaluation’ (IAWG), part of the UN Consultative Committee on Programme and Operational Questions (CCPOQ). It is a group of the heads of UN evaluation offices that discusses system-wide evaluation issues. UNEG’s initial work was on designing, testing, and introducing a monitoring and evaluation system for UN operations across specialised agencies, funds, programmes, and affiliated organisations. The UN Development Programme (UNDP), which funded most UN operations, provided the secretariat and leadership for the Group. It was renamed UNEG in 2003 (UNEG Secretariat, 2008). The UN also has an Office of Internal Oversight Services, established in 1994 by the General Assembly. The office assists the Secretary-General in his oversight responsibilities in respect of the resources and staff of the organization through audit, investigation, inspection, and evaluation (OIOS, 2018).

The Independent Evaluation Group (IEG) is independent of the management of the World Bank Group and reports directly to the Executive Board (IEG, 2018). It is charged with objectively evaluating the activities of the International Bank for Reconstruction and Development (IBRD) and the International Development Association (IDA; together, the World Bank), the work of the International Finance Corporation (IFC), and the Multilateral Investment Guarantee Agency’s (MIGA) guarantee projects and services, in order to provide accountability, enable course corrections, and avoid repetition of past mistakes in meeting the agenda of making the world poverty-free.

World Bank project evaluations began in 1970 through the Operations Evaluation Unit in the Programming and Budgeting Department. In 1973, it was renamed the Operations Evaluation Department and became independent from Bank management. The IFC established an evaluation unit in 1984; in 1995 the unit increased its independence and was renamed the Operations Evaluation Group. MIGA created an evaluation office in 2002. In 2006, the Board of the Bank Group integrated these into a single unit, the Independent Evaluation Group (Wikipedia, 2017).

The International Organisation for Cooperation in Evaluation is a UNEG-supported movement that represents international, national, sub-national, and regional Voluntary Organizations for Professional Evaluation (VOPEs). It strengthens international evaluation through the exchange of evaluation methods and promotes good governance and recognition of the value evaluation has in improving peoples’ lives (IOCE, 2018). The EvalPartners group, managed by UNICEF and IOCE, is supported by various partners, including DevInfo, IDEAS, UN Women, UNEG, UNDP, ILO, IDRC, the Rockefeller Foundation, Better Evaluation, ReLAC, Preval, Agencia Brasileira de Avaliacao, SLEvA, and IPEN, all working together for SDG evaluation (EvalPartners, 2017).

The International Development Evaluation Association was established in 2002 as a global professional association for active development evaluators. It aims to improve and extend the practice of development evaluation by refining knowledge, strengthening capacity, and expanding networks, especially in developing countries (IDEAS, 2018). The American Evaluation Association (1986) and the European Evaluation Society (1992) were established to promote evaluation use and to enrich its theory and practice on the two continents.

 Global Evaluation Agenda (GEA) 2016-2020

To support monitoring and evaluation for achieving the 2030 Agenda for Sustainable Development, the United Nations adopted resolution 69/237 on 19 December 2014 on “building capacity for the evaluation of development activities at the country level”. This was a step towards building global cooperation for evaluation, the year 2015 having already been declared the International Year of Evaluation (EvalYear) at the 3rd International Conference on National Evaluation Capacities at São Paulo, Brazil, in September 2013. The idea behind this was to advocate and promote evaluation and evidence-based policy making at the international, regional, national, and local levels (EvalPartners, 2016).

The Global Evaluation Agenda (GEA) 2016-2020 is the first ever long-term global vision for evaluation. The GEA was developed through a broad global collaboration under the EvalPartners umbrella. Discussions around evaluation capacities and capabilities intensified during the Year of Evaluation in 2015, celebrated at 92-plus events around the world. The Year of Evaluation culminated in a historic global gathering hosted by the Parliament of Nepal in Kathmandu, where the GEA was launched and endorsed by various stakeholders including governments, parliaments, civil society, and academia, in an atmosphere of global solidarity and partnership (EvalPartners, 2016). EvalAgenda2020 envisions strengthening the four essential dimensions of the evaluation system: an enabling environment for evaluation, institutional capacities, individual capacities for evaluation, and the inter-linkages among the first three dimensions (EvalPartners, 2016).


 Development Evaluation in Independent India

A system of evaluation was conceived in India simultaneously with the planned economy. With the launch of the first Five-Year Plan in 1951, a need for systematic evaluation was felt; the first plan deemed that systematic evaluation should become a normal administrative practice in all spheres of public activity, and for this the Planning Commission (PC) began developing evaluation techniques by establishing the Program Evaluation Organisation (PEO) for independent evaluations of community projects and other intensive area development programmes (Chandrasekar, 2015). From there, India has come a long way over the past 67 years.

Dr S. Chandrasekar served as the Director of the Regional Evaluation Office at Chennai and then as Adviser at the Directorate of Economics and Statistics, Ministry of Agriculture, New Delhi. He wrote an article about the history of development evaluation in India, published as a web special by Yojana in November 2015, around the time when many changes were happening in the Indian evaluation scenario. Most of this section is based on his article and on a World Bank report on the M&E system in India (Chandrasekar, 2015; Mehrotra, 2013).

 Evolution of evaluation institutions in India

The history of institutionalised development program evaluation in India can be divided into the following phases, based on how the Government of India treated its evaluation organisations:

1. Planned economy phase 1952-1973

2. Neglect phase 1973-1995

3. Resurgence phase 1995-2013

4. New institutions and paradigm phase 2013-current

  Planned economy phase 1952-1973

The PEO was established in October 1952 as an independent organisation under the Planning Commission to evaluate development programs implemented in the first Five-Year Plan and to bring out their successes and failures through reports. Over the first four Five-Year Plans, PEO activities expanded considerably, and most states established their own evaluation units in the sixties to evaluate state-level programs for cross-verification and learning in tandem with the PEO. The scope of the PEO extended to include plan schemes/programmes in the sectors of health, agriculture and cooperation, rural industries, fisheries, family welfare, rural development, rural electrification, public distribution system, tribal development, social forestry, etc. Later, the PEO also evaluated Centrally Sponsored Schemes (CSS) (Chandrasekar, 2015).

The PEO, a field-based organisation, had a three-tiered structure: headquarters in New Delhi at the highest level, 3 Regional Evaluation Offices at the middle level, and 20 Project Evaluation Offices at the lowest level. Beyond these were the state offices, taking the total number of offices to 40 and the staff strength to over 500. The PEO had relative autonomy, as all its offices and the state evaluation offices reported to the Director, PEO. The evaluation reports were a major part of the annual conference of State Development Commissioners, enabling follow-up actions (Mehrotra, 2013).

Neglect phase 1973-1995

With the reduction in the scope of Planning Commission activities in the early seventies on the recommendations of the Administrative Reforms Commission, the PEO began its phase of decline and neglect. While the extent of its work was expanded to include urban areas too, the scope of its evaluations was reduced to the operational, financial, and administrative aspects of schemes and programs, rather than the overall design of programs and their impacts. It was recommended that only those studies be taken up that could be made available quickly for use by line divisions. This was accompanied by the appointment of Indian Economic Service officers, generalists compared to the earlier subject-specialist academicians, as heads of the PEO. The PEO’s internal functions were merged with the Planning Commission in April 1973, reducing it to a division within a department (Chandrasekar, 2015). Around the same time, based on the recommendations of the Staff Inspection Unit of the Ministry of Finance, field offices were reduced from 40 to 27 by the end of the seventies (Mehrotra, 2013).

The PEO featured only briefly in later plans and received insufficient financial outlays, limiting its ability to bring out good reports on time. Its reports were delayed, no longer covered program impact and design, and were given less importance by the concerned ministries, thus reducing their use. This in turn reduced the number of studies being done (Chandrasekar, 2015).

Resurgence phase (1995-2013)

The resurgence in demand for evaluation can be traced to the late nineties, when the Planning Commission got involved in the design and implementation of social safety net programs to counter the adverse effects of the economic reforms initiated earlier. Unfortunately, the Fiscal Responsibility and Budget Management Act 2003 ensured that the PEO and its field offices remained highly understaffed. This began the practice of outsourcing studies to social science research institutes. From the ninth plan (1997-2002) onwards, the PEO involved the ministries and subject-matter expert groups in ensuring that some actions were taken based on its reports.

The eleventh Five-Year Plan (2007-2012) stressed building online MIS for all flagship programs. A Development Monitoring Unit was set up in the Prime Minister’s Office in 2009, and a Performance Monitoring and Evaluation System (PMES) was created at the Cabinet Secretariat. The functions of monitoring and evaluation were being mixed together. A scheme named Strengthening Evaluation Capacity was launched in 2006-07 to reduce the financial problems at the PEO, but it did little to address the administrative and staff problems (Chandrasekar, 2015). During this phase of resurgence in demand for evaluation activities, mixing up monitoring and evaluation, ignoring the plight of the PEO, underutilising studies, and outsourcing to private institutions without a clear policy were a few grave mistakes made. As a result, by 2012, only 6 regional and 8 project offices were left (PEO, 2012).

New institutions and paradigms phase (2013-current)

A new Independent Evaluation Office (IEO) was established in the 12th plan with a mandate to “conduct evaluation of plan programmes, especially the large flagship programmes to assess their effectiveness, relevance and impact. It also has the freedom to conduct independent evaluations on any programme which has access to public funding or implicit or explicit guarantee from the government.” Instead of using the regular organised services available to the government, it proposed to get evaluations done by selected institutes and researchers identified through tender processes (Chandrasekar, 2015). Not much is known about how the IEO was expected to function and how it differed from the PEO.

With the change in regime and the dissolution of the Planning Commission in 2014, the PEO and IEO were merged into the Development Monitoring and Evaluation Office (DMEO) in September 2015. In 2017, most field offices were shut down and their staff attached to the DMEO at New Delhi (Indian Express, 2017). Even fewer details are available on official websites about this office than about the PEO (and IEO). The PMES started earlier has now been replaced by the Pragati dashboard for direct follow-up by the PMO for better implementation, but this misses any opportunity for evaluations based on the Results Framework documents prepared by the ministries (The Economic Times, 2015).

 Concurrent evaluations

In the resurgence phase, concurrent evaluations were being regularly done by ministries themselves for their programs. For example, the National Food Security Mission under the Department of Agriculture and Cooperation, Ministry of Agriculture, was carrying out its own concurrent evaluations in 2010 (NFSM Cell, 2010), and the Ministry of Rural Development had a Concurrent Evaluation Office (CEO), set up for managing the Concurrent Evaluation Network (CENET) of MoRD in conjunction with the IEO. The CEO was closed in July 2016 (PIB, 2016).

Concurrent evaluation is a formative or process evaluation which annually evaluates all the activities carried out to achieve program objectives. Concurrent evaluations have been done in the past too; an example is the concurrent evaluation of the Integrated Rural Development Program carried out by the Department of Rural Development, Ministry of Agriculture, in 36 districts of the country from October 1985 for at least a year. As ordinary evaluations in that era were usually ex post facto and did not provide remedial measures or mid-course corrections, a need for concurrent evaluation was felt (Saxena, 1987). The term concurrent evaluation isn’t common outside India, where the term self-evaluation is used for internal, regular evaluations (UNEP, 2008).

Current Scenario

The past decade has been very eventful for the evaluation systems in India. The IEO was set up and closed, the PEO was closed, the Results Framework Document-based PMES was started and closed, and the DMEO has been started recently. This section captures the current scenario.


 DMEO at NITI Aayog, New Delhi

While the Development Monitoring and Evaluation Office (DMEO) was established in 2015 and NITI Aayog has a functional, updated website, very little information is available about the office, even in the Digital India age. The little information available comes from a few newspaper articles and the telephone directory of NITI Aayog. While the 2016 contacts document mentions 7 regional DME offices and 8 project DME offices, the 2018 document mentions no regional or project offices (NITI Aayog, 2018). This change is also hinted at in a 2017 news report mentioning that the 15 offices were being shut down and staff called to the headquarters in Delhi (Indian Express, 2017).

In the current set-up, the DMEO has a Director General at the helm, a Joint Secretary, two Deputy DGs, an Under Secretary, and staff attached to their offices. On the technical/specialist side, there are a few Senior Research Officers, Senior Statistical Officers, a Senior Consultant, and many Economics Officers, Consultants, Research Associates, and Young Professionals, a total of about 25-26 people. There is some administrative staff as well (NITI Aayog, 2018).

In 2016, the DMEO called for Expressions of Interest from research institutions, NGOs, and universities for carrying out evaluation studies. While this call for EoI is available online, the final list is not found on the NITI Aayog website. As per its mandate, the DMEO is expected to get evaluation studies done as requested by various ministries for their programs. This is similar to what the PEO and IEO were doing.

Evaluation in Indian states

Evaluation was an integral component of every state’s planning and implementation process while the PEO was flourishing. States have taken varied paths in the past few decades since then. While evaluation is reported merely as an activity under the Directorate of Economics and Statistics in the Planning Department in most states, Karnataka has an Evaluation Authority, and in Goa and Sikkim evaluation features in the very name of the directorate. A look at the official websites shows that evaluation occupies an important position in many states.

Across the states, evaluation is generally a function of the Planning Department, which houses the Directorate of Economics & Statistics, responsible for all statistical data collection and analysis and, in most states, for monitoring and evaluation functions. Most of these functions started during the third plan period (1961-66) (PEO, 2006).

Outsourcing of evaluation studies to competent agencies has been going on for a couple of decades, and the websites, mostly developed in the last 10 years, show records of processes carried out by various states since 2012-13, under the 12th Five-Year Plan. Unlike Maharashtra, though, very few states refer to the UN guidelines in their empanelment process. Records of how the feedback generated by these studies is used are poor. The Program Evaluation Organisation brought out one study in 2004 and another in 2006, titled Development Evaluation in PEO and Its Impact (Vol I and Vol II), which summarise the follow-up actions taken based on the evaluation studies done in the preceding years (PEO, 2006). Beyond this, not much is documented.
