
THE CONCEPTUAL FRAMEWORK:
THERE IS NO PANACEA

by
Dr. Hector R. Anton
Partner, Haskins & Sells
December 7, 1976

[Introductory Note: Dr. Anton is partner in charge of the Accounting Concepts Department at Haskins & Sells and is a member of the Accounting Standards Executive Committee of the AICPA.

Dr. Anton has taught economics and accounting at several major universities, including the University of Minnesota, the University of Washington and the University of California at Los Angeles and Berkeley. While at Berkeley, he was chairman of the accounting faculty for five years and Associate Dean of the Graduate School of Business Administration from 1969 to 1972.

He has also been a visiting professor at Victoria University, New Zealand, and the University of Chicago, a Fulbright Professor in Finland, and the holder of a Wallenberg Foundation grant at the Stockholm School of Economics. He is the author of several books, and his published articles have appeared in leading accounting journals both here and abroad.

Dr. Anton earned his bachelor's and master's degrees at UCLA and his Ph.D. in economics from Minnesota.

The topic of the lecture comes in response to the continuing search for more acceptable accounting models, particularly in the wake of a growing perceived dissatisfaction with the historical cost model. A plethora of models have emerged. The more narrowly drawn models lack applicability while those which are more general lose reliability. This and other observations lead to a conclusion that no information model can ever be developed that would be totally responsive. Conventions must be agreed upon from among a family of uses and purposes. Hopefully these will serve for a significant period of time.]

I am pleased and honored to be here with you tonight honoring our good friend, Emanuel Saxe. Actually, Mannie is the one who is distinguished, and not the lectures, at least not my own. Mannie, my very best wishes to you. I also feel a certain kinship with this school, one that I discovered rather late -- the school and I were launched in the same year. And, despite the fact that we have encountered some rough seas, we're still sailing along. I'm also glad to be back here in an academic atmosphere where it is well understood that whatever I have to say here is my own personal view and does not necessarily reflect any position of Haskins & Sells.

In the search for a conceptual framework, the construction of models is not sufficient, let alone models that are bastardized by incorporating elements from a number of theories. If one is going to construct a model, one should construct a nice neat model that stands on its own. Then only the premises can be attacked. What then is my thesis? We have been in search of a cohesive coordinated body of theory for a long time, but that search has been accentuated within recent years by widespread dissatisfaction with the historical cost model. That dissatisfaction, as you well know, is widely shared by academics, financial executives, independent accountants, economists, Congressmen and others. Obviously, dissatisfaction with the historical cost model has been fueled by worldwide inflation, investor consumerism and an accelerated rate of change.

As a result we have seen a great plethora of models -- one of which was given at the last Saxe lecture. Each model has strong proponents; and each model, frankly, is in search of a measurement system that makes sense. However, each model is also flawed (any model is flawed for that matter) because of a dilemma. That is: The narrower the basic premises and objectives, the tighter and more reliable that model will be; but then it suffers from narrowness in applicability. Those of you who were here at the last lecture will attest that Dr. Chambers has a very nice model. Frankly, given his premises, there is nothing you could attack. However, his premises were flawed in that the only things he was interested in, essentially, were cash flow and current liquidity, thus his concentration on current exit prices. His model, therefore, avoided or negated all other kinds of considerations, including investment possibilities. Well, if one broadens the premises and objectives, then a loss in reliability will result and the model becomes less useful in making individual decisions. Therefore, it is not surprising that most of the questions addressed to Dr. Chambers had to do with the fact that the audience didn't like his premises, or didn't like his view of what the output from that decision model would be. Dr. Chambers, of course, found fault with the conventional historical cost model because it did not give him certain things that he wanted -- such as being able to predict that no cash could be obtained from a half-built work-in-process inventory.

This general observation should not really surprise us, because thorough study of measurement and information theory leads to the conclusion that technically, let alone semantically or at the influence level, no information model can ever be developed that would be totally responsive. That's it, we're just not going to get there! Therefore, conventions (compromises if you wish) must be agreed upon from among a family of uses and purposes. Hopefully, these will serve satisfactorily at least for a significant period of time -- a time during which those measurement standards can be held constant, before enough people get dissatisfied with them and want to go on to something else. Therefore, what we need is, frankly, arbitrary. No academic likes to hear that word, but we do have to have arbitrary conventions about users, about their utilities, about their decision functions, and about the actions users need to take. Based on these compromises, certain information is required of the accounting model; then we have to make decisions as to how we can use the information, and so forth.

These conventional agreements, of course, will involve choices from different theories because general acceptance involves people who may agree about some things only if others compromise their positions. Some of these compromises, incidentally, would have to be political. Also, a few nonaccountants have been watching the current scene, with the result that some politics have entered into decisions about accounting rules and standards. Those choices, however, have to have certain good qualities. One essential quality, and one that we ought to be striving for constantly, is that the output of the accounting model is capable of being transformed by the individual users into information useful to them in unique circumstances. That's my thesis, in brief. However, I want to support these assertions by reference to some of my previous work as well as some of the work that others have done.

While I caution against the optimistic view that any model or conceptual framework will be a panacea and solve our problems "forever," I do want to offer a systematic way of analyzing accounting information models in such a way that we can evaluate these models' strengths and weaknesses. But, more importantly, regardless of the model, we should be able to determine what kinds of assumptions about externalities, that is, factors external to the model, need to be made in order to make a model work, in order to make the model viable, and, if we can, to determine what the cost of such assumptions might be. In order to do so, and I want to be just a bit tight in the formulation, I will draw on certain aspects of other related disciplines -- measurement theory, communication theory, language, information economics and behavioral theory. In brief, I don't want to stop at any arbitrary "accounting" boundary, conventional or otherwise. I'm also going to make assertions that can't be supported here, today, but which are supportable either by my own writings or others'. What I want to do here is simply to sketch a blueprint of the kind of relevant questions that can be raised in model building; and perhaps give some direction to our study of the conceptual framework of accounting.

First, a brief word about measurement and five points to keep in mind for later use: One is that we measure only because we want to be precise about something in a specific use. My favorite example is of a yard. For my general purposes half my arm's span is within half an inch of a yard. Generally, that is enough precision for what I need. However, a standard is set under conventionally specified but rigid conditions, measured in such a manner that the standard will be useful in some other totally different circumstance or environment from that in which the standard is set. The standard yard, believe it or not, is a unique metal rod held under specific temperature and atmospheric control at the National Bureau of Standards, but we need to use the idea of the length of the specific metal rod in order to measure under other conditions.

Any measurement is an agreed standard. It's "manufactured" if you wish. The need for the precision, however, is set by how we use that particular measure.

Secondly, we have to set the state for the measure, that is we have to set the environment under which that measurement would take place. Obviously, if I'm talking about a yard I can't go and get that standard rod out of its vacuum case, but I do have certain transformations or adaptation rules that enable me to make a measurement in a particular case. In other words, we need first a standard, then a specific environment that we want to measure, and thirdly, we need to develop some adjustment rules. Those adjustment rules are important because they are devised to make the standard applicable in a wide variety of circumstances. We can then use the measure, not only here and now, but tomorrow or years after, and still be able to bridge both time and space in making the measure.

The fourth step is one that generally everybody considers to be the measure but is only one step in the whole process: A number or other symbol is applied. This is called numeration. Finally, we have to use the symbol in a specific action. Some information theorists, among them Henri Theil, go so far as to say that no information exists unless that information influences a desired action. Keep those conditions of measurement in mind as I develop my thesis.
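
A minimal sketch, assuming a hypothetical temperature adjustment factor and tolerance, shows how the five steps fit together -- standard, environment, adjustment rules, numeration, and use in an action:

    # Illustrative sketch of the five measurement steps; the adjustment
    # coefficient and the tolerance below are hypothetical, not standards.

    STANDARD_YARD_INCHES = 36.0                 # 1. the agreed standard

    def adjust_for_environment(raw_inches, temperature_f):
        # 3. adjustment rules: adapt the standard to the conditions at hand
        expansion_per_degree = 0.0001           # hypothetical expansion factor
        return raw_inches * (1 + expansion_per_degree * (temperature_f - 68))

    def measure_in_yards(raw_inches, temperature_f):
        # 2. the state, or environment, in which the measurement is taken
        adjusted = adjust_for_environment(raw_inches, temperature_f)
        # 4. numeration: a number is assigned in terms of the standard
        return adjusted / STANDARD_YARD_INCHES

    # 5. use in a specific action: is half an arm's span close enough to a yard?
    if abs(measure_in_yards(35.6, 75) - 1.0) < 0.5 / 36:
        print("precise enough for this use")
    else:
        print("a finer measure is needed for this use")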

Next, we have to know something about language. Language in some ways is a kind of a measure, but in other ways it is different. Language is a general code that enables us to communicate adequately from one person to another -- again bridging time and space. For example, even though I read it poorly, I can read Chaucer -- who in turn never heard of me or America. And, the other day I met a Japanese professor who studied flow of funds from my book 15 years ago. Language bridges time and space, and obviously, that is also what is needed in the measurement process. The aim of language is generality, again to improve the chances of utilization. We want a language broad enough so that we can communicate with many people and at different times -- which also is one of the aims of accounting. We want to communicate broadly, but as the language or code becomes more general the dilemma arises: It loses its ability for fine distinction. For example, all of us who have studied economics know that even though economics texts appear to be written in English, those economics books are really written in Economese. We also know that the man on the street does not make any sense of them. (Perhaps they do not make any sense to economists either, but that's another story.) At any rate, as the code becomes more general, it loses the characteristic of being useful in precise situations. As special cases recur, specialists need to be more precise, and before long a metalanguage develops.

To a certain extent, all professionals and scientists use metalanguages to communicate better with each other (and sometimes, it is claimed, to keep the rest of the population ignorant). If metalanguages develop, then the general audience loses the capacity to understand what they are saying. That is a problem that constantly has to be faced and will continue ad infinitum. Society keeps developing various sets of languages and metalanguages for special purposes. In order to understand the various languages we build redundancy into the system. A dictionary, for example, helps by defining some words that are unknown in terms (words) of others that are known. Using a dictionary permits one to be more precise, but it takes time. At the same time, of course, some learning takes place and if enough learning is achieved, the person is no longer in the "general" audience but has become part of the "precise" audience.

The important thing to stress here is that many people who read economics or accounting or other special books will look at the English words and believe they know what they are reading, but without actually knowing. That extends also to such things as financial statements, and that extension requires us to guard very carefully against superstitions. A superstition is simply perceiving something falsely. In the situation above, what is written may not be what one perceives in the reading. So, guard against superstitions both in measurement and in language.

The next step is tougher because it introduces a little-known concept, entropy -- that is, the measure of information content in mathematical communication theory. That basic premise is used to measure the capacity of the information source, the communication channel, or the communication receiver. Simply stated, experts in communication theory, information scientists if you wish, have been able to quantify information such that they are able to make general decisions about the information system. That is, information is capable of being measured in a technical sense; we have the means for measuring the amount of information in the source, the capacity of any communication channel to transmit that information, and also the capacity of the receiver to receive and perceive such information. To parallel the accounting system, the information source would be the business environment, the information communication channel would be the accounting and reporting system and the receiver would be the user of the reports.

In physical and statistical terms, entropy is measurable. Information theory is well developed and has many applications in a great variety of areas. What we need to know here is that entropy is a concept that is measurable, and that any message in any information system (such as accounting) has limits on what can technically be achieved. In other words, there are physical limitations in these areas. The capacity of the channel (that is, the accounting system) and the capacity of the receiver (the user) set limits, when considered in relation to the magnitude of the information source, that cannot be overcome -- they just simply cannot be overcome.
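
A minimal numerical sketch, assuming hypothetical source probabilities and an assumed channel capacity, shows how the entropy of a source is computed and why the capacity limit cannot be argued away:

    # Shannon entropy of a source, in bits per message; the probabilities and
    # the channel capacity here are hypothetical illustrations.
    from math import log2

    source_probabilities = [0.5, 0.25, 0.125, 0.125]   # four possible states of the world

    entropy = -sum(p * log2(p) for p in source_probabilities)   # 1.75 bits per message

    channel_capacity = 1.0   # assumed capacity of the reporting channel, bits per message

    # If the source generates more information than the channel can carry,
    # some loss or confusion is unavoidable, whatever the coding scheme.
    print(f"source entropy = {entropy:.2f} bits; channel capacity = {channel_capacity:.2f} bits")
    print("loss is unavoidable" if entropy > channel_capacity else "lossless coding is possible")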

Unfortunately, those capacity relationships result not merely in selective loss of information, but also in increased confusion. Three examples that one can relate to are these: First, if a TV channel gets too strong a signal, or one signal too many for its capacity, one does not get part of the picture; most of the time the result is what we call "TV snow." I know you all have experienced that. The TV snow that you see on your TV receiver is the result of too much signal for its capacity. Secondly, and I'm sorry it didn't snow today, you have heard of "snow jobs" -- a use totally unrelated to TV snow, but representing the same phenomenon. A snow job results when somebody feeds you too much information, deliberately. In that case, if you don't understand a thing of what's going on, and if you want to appear that you do, then you use the "superstition" and "get snowed." A third example is this particular part of the lecture.

The mathematical theory of communication also defines conditions as to what goes into, and what comes out of, the system. Encoding is needed to input information into a system, and decoding is needed to output information. Unless one knows what code was used in encoding, one cannot decode. Unless the conditions or measures of the input are known (I'm going to talk about standards a little later), there's no way that one can know (cognitively) what comes out. If one does use that output without knowing the code, again one will be using it as a superstition rather than as a fact. There is ample literature in this area. Other aspects of information are also relevant to what I have to say, such as costs of information, the economics of information, the relationship of the cost of information to outcomes, etc., but these aspects are not central here.
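
A toy sketch, with hypothetical figures and deliberately simplified "codes," makes the point concrete: the same transmitted number reads very differently depending on which encoding rule the user assumes was applied.

    # Toy illustration: output can be decoded correctly only if the encoding
    # rule is known. The figures and the two "codes" here are hypothetical.

    asset = {"description": "inventory on hand"}

    def encode_historical_cost(item):
        return 100     # what was sacrificed to acquire it (hypothetical)

    def encode_exit_price(item):
        return 40      # what it would fetch if sold today (hypothetical)

    reported = encode_historical_cost(asset)      # the system transmits "100"

    # Decoding with the wrong code produces a confident but false reading:
    wrong_reading = f"we could raise {reported} in cash today"       # a superstition
    right_reading = f"{reported} was given up to acquire the item"   # the encoded meaning

    print(wrong_reading)
    print(right_reading)
    print("an exit-price code would have transmitted", encode_exit_price(asset))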

Let's see if I can summarize the elements. In the simplest terms, first I have talked about the existence of some state of the world -- the environment (source) as it relates to the activities of the business unit. Secondly, in order to make use of that state of the world, and that is really what accounting is all about, we want to transform that real state of the world into abstract symbols -- the data that can be put into an information system and which later can be output. In order to do that, a decision must be made as to which of the activities and states of the environment are to be put into the particular system. Immediately, we need selection standards. In a sense the perfect accounting system, of course, is the whole world. (There are some models, incidentally, that cannot be modeled -- that is, the only model possible is the real thing itself.) Third, the selected items must be encoded into a suitable language that is susceptible to measurement (data bits are used in information theory); recall the assignment of a number in the discussion on measurement. These, then, must be structured in such a way that the channel, here the accounting system, can be consistent with the capacity requirements of both the environment (source) and the user. In accounting we use a simple two-bit basic code -- zero or one, debit or credit, in or out, negative or positive -- but those two little bits have been structured in such a way that significant parts of the whole world are encompassed. You know, that's really a marvelous thing.
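
As a minimal sketch of that two-symbol structure, with hypothetical accounts and amounts, every event is encoded as equal and offsetting entries, so the structure always balances:

    # Minimal sketch of the debit/credit code; accounts and amounts are hypothetical.
    from collections import defaultdict

    ledger = defaultdict(float)

    def post(debit_account, credit_account, amount):
        ledger[debit_account] += amount     # "debit": one of the two basic symbols
        ledger[credit_account] -= amount    # "credit": the other symbol

    post("Cash", "Capital", 1000)           # owner invests cash
    post("Inventory", "Cash", 400)          # cash exchanged for inventory
    post("Cash", "Sales", 250)              # cash received on a sale

    assert abs(sum(ledger.values())) < 1e-9  # the encoding always nets to zero
    print(dict(ledger))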

Processing and other data manipulations take place within the system. Here those who want to study information seriously will study cost of information and information economics. There are costs of summation, there are costs of mathematical processing -- even arithmetic, there are costs of classification, there are costs of selection, and there are costs of aggregation. These manipulations are done for a variety of reasons, but mostly to enable more efficient output from the system.

A partial state of the world is now encompassed in a little black box, but it is useless unless something can be retrieved from that black box. What is needed? Another policy decision, this time selection criteria are required to guide us as to what should be retrieved, and how and when. That selection results in some form of output which, in turn, must be decoded in order to make the abstract data relate to real state of the world considerations. Finally, given that an action relative to that information must be taken in order to affect the environment, the cycle is complete and another starts anew.

We can now clearly focus on many problems. If, as many information theorists claim, no information exists unless it can affect a decision, and if, in order to construct a system, the actions that need to be influenced must be known (as well as who's going to take the action, and when and how), then these output considerations must be built into the input considerations. However, general systems must be built because we do not know who the decision makers are going to be, or at what time they are going to take the action or under what circumstances. A general system obviously cannot accomplish the task perfectly, so certain kinds of compromises are needed. One thing is clear: actions need to be known at the input stage.

Secondly, even one individual will have many activity decisions, not one but many, and individual input and output selection standards may very well conflict. There has been some work done on conflicting objectives and conflicting standards (by Balderston, for one), but we're not very far along in that area. At best, a system tailored to one individual may be designed, let's say, by asking the individual to state his preferences and behavioral assumptions and working on them. But getting an individual to state preferences frequently results in erroneous signals because individuals do not communicate very well. Therefore, inferences from observed actions are frequently drawn as to what the preferences really are, etc. Even if we can determine individual preferences at a given time, we know that any individual can change them easily; sometimes without rationality or apparent rationality or explainable circumstances.

Thirdly, to have generality many users must be reached. Combinations and permutations of desired actions then become very large. If one individual has many actions that he can take over his life-time, you can imagine the number that many individuals will have. Therefore, to achieve any kind of rationality we must reach agreement by convention. (No academic likes to say that because it sounds too much like the opprobrious "real world" kind of thing -- the "other world.") Agreement by convention must be achieved so as to serve potential users and their potential actions as they arise. That agreement has a side characteristic: If one agrees on a set of actions or a set of standards, automatically all others are excluded. That's kind of tough. You can't have a nice, narrow, precise model and come out with nice, narrow, precise answers without excluding and ignoring all the other things that happen. The potential users, then, really have to be considered "standard users" in the true sense of the word standard. The actions that they take, or the decision functions ascribed to them, will have to be standard actions or decisions. Superstitions must be avoided, otherwise, again we're lost. Following measurement theory, if we have standard users, standard activities, standard functional relations, etc., then we must have some means of following the rules of measurement. That is, we must be able to adapt from the standard case to the unique case in the unique environmental situation. Frankly, we have been weak in pursuing that kind of information both as academics and as professional people.

Given conventional agreement as to users and actions, standards for processing within the accounting system, and standards for input selection and measurement will also require widespread agreement. As we all know, people don't agree for very long.

Conventions are transitory. It is no real surprise that we may be unhappy with the historical cost model. That model, although it appears as if it has been with us forever, has really not been dominant except since the 1930s. Prior to that, we had current cost models, single entry, and mixtures of different things. When conditions change again, we'll undoubtedly become dissatisfied with whatever model is agreed upon now. As the state of the world changes, as economic payoffs change, the relative power of different users changes, and then there will be pressure to change the input and the output measurement standards -- and especially to change the focus as to what should represent standard users and standard actions.

Well, I've been rather abstract, and now to pin it down. Let's see if I can make it a little more relevant to today's conditions. First and easiest, there has been some talk of changing the basic structure -- that is double entry bookkeeping. Double entry has been known for over 600 years; Paciolo wrote in 1494 but by that time it had been used sporadically for 150 years. However, double entry did not come into prominence until the second half of the last century. In fact, there are some classic 19th century books debating the issue -- I remember one by Cronhelm, with the interesting title Double Entry by Single. At any rate, since the argument in the mid-1800s as to whether or not we needed double entry, there has not been too much talk about doing away with double entry. There has been some talk about adopting quadruple entry; German scholars have advocated a form of quadruple entry, and others, especially computer experts, have been talking about n-tuple entry for some time. No one has done much about those speculations, so abandoning double entry is not too serious an issue. Therefore, we cannot expect channel capacity to increase very much from that direction. More serious attention has been given recently, and you might look for that kind of thing in the conceptual framework, as to whether or not there should be articulation between the income statement and the balance sheet. As viewed by some, it could take the form of a step back toward single entry. However, the problems that are posed by articulation can probably be solved through other means.

Secondly, the state of the world certainly suggests extended pressure on the present conventionally-agreed upon model which has been paramount since the 1930s. Two major influences are inflation and volatility of foreign exchange rates, and heavier taxation with its consequent impact on capital formation. Frankly, it is too early to tell whether any agreement can be reached on any one alternative to the present model. The FASB's general purchasing power approach and the SEC experiment with replacement cost information have been more cursed than blessed and seem to have created more problems than solutions. On top of that, no general standards are available for either one. Other models posed by individuals have not had much general support. In brief, they either narrow the objective too much, as in Dr. Chambers' model, or are essentially nonmeasurable, as in Edwards and Bell's. Given the institutional penetration of the present model, that is, historical cost permeates practically everything we do in the business world today, one wonders whether a change from that model will ever be economically feasible short of runaway inflation.

Thirdly, analysis of users (and recall that current focus is on users not on input, although past models have been built on input characteristics, i.e., what could be recorded, rather than on output characteristics, i.e., what information is needed in decision-making) indicates there are some marked changes taking place relative to the users themselves. There is less personal control by individual shareholders. We are faced, to a considerable extent, not even with absentee ownership, but with stockholders who don't behave as owners. To a considerable extent stockholders have been atomized. The major concern of stockholders appears to be quite changed from what it was 20 or 25 years ago. There has been a trend towards being in and out of the market, towards looking at one's portfolio, and away from either loyalty to a company or keeping tabs on one single company over a long time. Secondly, the impact of the institutional investors has not yet been comprehended. The influence of these institutional investors, such as trusts, pension funds, ESOPs, etc., is growing. As some of you may have read recently, Peter Drucker is turning his attention to that area and will probably popularize it. Drucker foresees some drastic social changes as a result of that trend, and obviously, he will indicate paths for other students to study in greater detail.

The "transient" investor has tended to focus on current results rather than on long term averages. The emphasis on current results which is reflected in FASB statements, obviously, has been accentuated by litigation and by significant settlements. You can't have it both ways: you can't have emphasis on cash flow to the current stockholder and also look at trends that erase aberrations over time. You can't both have current emphasis on depiction of the economic impact of foreign exchange fluctuations and volatility such as we have had recently, and at the same time say, "Let's look currently at the trend of the earnings and not at the impact of those fluctuations." That's the whole problem with FASB Statement No. 8: some people are still looking at the financial statements having one objective in mind, while the financial statements are based on a premise going in the other direction.

Recent academic interest, especially from the finance field, has focused on efficient markets and portfolio theory. The strength of such influence, and its impact on conventional agreements about our accounting models, is not yet clear. In terms of the above analysis, in order for the efficient markets hypothesis to be important, users would need to desire changes in market prices as the action. That's what those models are measuring, the impact on changes in market prices, as opposed to having information to help determine whether to hold, buy or sell, or indeed whether owners would be better off or not in the long run. Market behavior would have to be conventionally accepted as an accounting goal.

Similarly, George Benston's hypothesis that SEC control is ineffective, and perhaps not needed, is based on a premise (perhaps warranted, perhaps not) of a desired market result -- that is, the impact of the SEC on market prices. A predictive market view of accounting could be constructed if such information were required, much as SEC control is required whether or not the market follows. So again the question is whether conventional agreement may be reached, and whether these relatively narrow views could or should be paramount.

Just a few words about the Objectives Study, and what's likely to be adopted. Most of us have seen the FASB's draft on the Tentative Conclusions. We should look at those objectives to see whether they are likely to lead to measurement standards that, though general, are susceptible to adaptation to individual unique use. We can focus on two objectives that are likely to be adopted by the FASB. One is that information should be useful to present or potential investors and creditors in making rational investment decisions. Here the FASB is implicitly addressing a general set of investors and creditors. The narrowing down to a set of "standard" investors, a set of "standard" creditors, and some sub-set of "standard" users who may make rational investment decisions remains to be done. Adopting that objective may lead us in paths that will help us develop models -- asserting the objective by itself does not.

Secondly, it's clear that the FASB will assert that these users need information to help them assess the prospect of receiving cash from dividends, or interest, and from the proceeds of sale or redemption of the security. Primarily, they propose to do that by evaluating the enterprise's ability to obtain cash through operations and financing activities. In turn, the prospects of the user receiving such cash flows will also be affected by the perceptions of investors and creditors (somehow these will add up!). Investors must determine how the enterprise's ability to obtain (net) cash could affect market prices of the enterprise relative to those of other enterprises. So while we have an objective to present accounting information that would be useful in making rational decisions, we don't know what that information is.

Further, it is proposed that those rational decisions will be forthcoming from the prospects of the enterprise being able to pay cash dividends; and also, of course, from the stock market prices upon the redemption or the sale of the security. Notice that a transformation function has been introduced here that is not part of accounting: it is a transformation function that is based on the perception of investors in judging the enterprise's ability to affect market prices of its securities. We know next to nothing about that transformation at this time, and there is no way accounting can comprehend it, but at least the tentative objectives focus on it. Again output information comes from the accounting system, which in turn comes from the real world, representations of which must be encoded in a standard way that must be known, so that when the output is used it cannot be thought to represent something else.

The FASB is apparently not convinced that financial accounting can measure an entire enterprise, nor are they convinced, on the other hand, that cash receipts and payments during a short period such as a year would adequately indicate whether performance is satisfactory. To buttress the objectives, the FASB has proposed the use of certain qualitative characteristics, which include relevancy and materiality, substance over form, freedom from bias (everybody has bias, every measure has bias; the best we accomplish is freedom from bias given a standard use or a standard measure), comparability, consistency, and understandability.

The objectives and characteristics are still thought of in the abstract; the test will come when attempts are made to make them concrete, and when these characteristics and objectives conflict. There are conflicts, frankly, between the perception of a creditor or the perception of an investor, and both are central to the Tentative Objectives. There is nothing in the Tentative Objectives, for example, that would help us to determine whether the present model or any one of the alternatively proposed current value models would be better than the others. Such a choice would still be dependent on evaluating the transformation function between reported enterprise data (Objective 3) and the individual stockholder's ability to adapt from that to their ability to predict receipt of cash (Objective 2), or to adapt such information to market prices (Objective 2b). The activity model which I presented to you in broad outline clearly brings out those relationships.

In summary, technical constraints limit our ability to use ideal models. Instead conventional agreements have to be reached as to standard input, standard systems, standard output, standard users and standard uses. A major task (which I hope all of your Ph.D. students will attack vigorously) is to develop and provide adaptive rules that will make accounting results usable in individual circumstances. Any conventional system, remember conventional means agreed upon, that does not provide standard output that is susceptible to such adaptation would not have conventional support for very long. The conceptual framework would be very helpful, and I hope that it will be helpful for at least 12 years.

SELECTED QUESTIONS AND ANSWERS

Question:
Could you elaborate a little bit on the models that you are describing, particularly the more abstract ones I missed? The reference, if there was one, to the kind of feedback control that you would hope to find in the information models?

Answer:
I'll give you a couple of references: Jerry Feltham's book, Joe Demski's book, O.K.? Very obviously, any model of this sort constitutes a feedback loop; a great number of loops as a matter of fact. Although I didn't get into such things as a channel with memory, or the structuring of the channel and so forth, the feedback itself can help structure the information. Obviously, these models are almost organic in the way they behave. The mere fact that we can't get the kind of information from the conventional accounting model that some people, not all people but some people, want is itself fed back into the structure of the system itself. The mere fact that we take investment decisions of some sort and they prove wrong is also fed back to improve the system.

As to the needs of users, and user interaction, next year is the year to look at related parties, joint ventures, Addendum to APB Opinion No. 2, and off-balance-sheet financing. Perhaps 1978 will be the year. In the meantime, there are all kinds of work going on, though not really on a concerted basis. I think we probably know far less about the needs of the users of municipalities' statements than we know about the needs of users of corporate information, which is little. But it's coming. I don't know if that was responsive to your question or not.

Question:
In terms of your comment that decision making is a political process, as opposed to a logical process, wouldn't that be another reason why on the agenda of Objectives the nonprofit institutions would have to be lower on the scale? Remember back when Mayor Lindsay was mayor, one of the rating agencies cut New York City's rating; they went up to a Senate Committee and said, "We have to start investigating these rating agencies." If a rating agency cuts the rating of a profit-making corporation, they wouldn't try to do that. They wouldn't have any chance of succeeding. So that's one of the reasons why they have to be on a lower scale.

Answer:
Thank you. I hope that you don't go away with the impression, however, that I said this was not a logical process. Rather, it is a political process, but certain words in the English language have certain nuances that I don't like, so I'm only trying to stress that we cannot construct models that can be used simply in practice. We've got to have conventional agreements, but these conventional agreements, I hope, will be based on logic. Though I may not have accomplished it, I thought I had a very logical structure: from the actions that need to be taken, to the user, his perception, his transformation functions, back to the output, from the output back to standard rules, to the channel, from the channel through all machinations back to the selection process from the real world. That is a very logical process, and I would hope in future developments that we don't become irrational.
I use the word transformation to get a better relationship to the standard, and adaptability rules, and so forth. Feedback is implicit in these models, as in all information models. Some very elaborate theorems have been developed as to how this feedback interacts, not only in the selection process, but also in the very basic input process. How do we select what we input into a system?

Question:
Dr. Anton, I'm concerned about your comments with respect to a nonprofit organization, or the current emphasis being placed on municipalities and the need for measurement and disclosures there. Would you care to comment about the applicability of models to these nonprofit or municipality type accounting situations?

Answer:
Let me just be general and abstract to start with. I kept shifting back and forth between accounting models and information models, etc., because the whole process is the same. Now the process of arriving at conventional agreement about the different models may very well change in the future. Up to now we have tended to stay within the same general model for all types of entities. The Trueblood Committee did propose 12 objectives, one of which had to do with nonprofit and governmental units. As nearly as I can tell at this point, the FASB is not doing anything relative to that area. They are essentially saying, if you wish, that that standard group is a little too far removed from the others, and that perhaps it will have to be studied in more detail later. As I recall, in my article relative to Objectives in last year's Journal of Accountancy, I made the point that perhaps at this time the FASB would be better off, as far as efficiency is concerned, by not trying to build the same model for everything. At the same time, I fully understand that information requirements for users of information from municipalities and other governmental units are every bit as real as investors' need for information about a corporation. This need is becoming more and more important. To use Tom's feedback notion, as this need is posed, more and more attention is going to be paid to it.

As to the political nuance, any agreement, any conventional agreement, or the process of arriving at that agreement is a political process. I did not mean to infer that this was something that only politicians do, we're all in a political process of some sort when we try to arrive at conventional agreements that in turn may become standards.

Question:
Dr. Anton, would you comment on the work of the Moss committee, especially as it relates to the continued role of the FASB in the development of accounting principles?

Answer:
Well, never having spoken to Mr. Moss, although he's a fellow Californian, I don't really understand, that's a good word, I'm ignorant of what the particular process ... of what his particular objectives are -- whether it was an in-progress report or whether he had other aims in mind. I do think, very honestly, that inspection of the whole process is warranted. We're all doing it; we're all in one way or another working with them.

Frankly, without having a lot more information about the Moss report, what I could say probably would be a superstition. I do have great faith in people, and I think that in the long run any investigation, I guess that's what they call it since they're an oversight committee for the SEC, is bound to look at the constructive things as well as those that need mending. So I'm not pessimistic at all about those actions.

Question:
Dr. Anton, just to support your point about the political process, and to defend the concept of the political process against those who think it is nonquantitative, I'd like to make two suggestions. One, the theory of the Edgeworth box is a very simple illustration of a political process; a political process being one of negotiation between two parties, both parties of which are under their own realistic constraints. I discovered a more recent source in a recent journal of regional science (the Regional Science Association, which meets alternately in Europe and the United States, publishes these papers); there I came across the name of Peter Nijkamp of Holland. His concern was a multivariable decision process concerning the building of polders of land salvaged from the sea by building dykes, into which you have to work economic and political considerations and quantify them. I don't have the citations at hand, but I'm going to be using it in my paper and I think that those that are interested might look it up.

Answer:
The fact that we talk about political processes does not mean that we talk about arbitrary positions and arbitrary values. To emphasize his point, we're all building dams of one sort or another, and they contain a lot of different facets. You know the FASB process is a political process; it is almost a legislative process in the way it behaves. The end result of it, however, is that the board's decisions will come out as standards, and then as they are usable and achieve the desired ends they result in general agreement. The process of the FASB is just a little bit more formalized than the APB's, but in essence, there were "political pressures" on the APB as well as there are political pressures on the FASB. Again, I think it's good that we all have our input and our day in court, as it were. I hope all of us are going to be constructive about it, and help those fellows do the very rough job that they have.

Question:
In terms of your talk, I wonder whether you might want to comment on two things that have occurred to me? One is whether the search for the sort of perfect conceptual framework, or a better conceptual framework as we saw it in the Trueblood Committee, is to develop a framework that can be used to narrow arguments? If you wait until you have the last word, you won't have the first word; we've now spent three years doing this. And secondly, and related in terms of your notion of encoding and decoding, I wonder whether standards building is really what we ought to be doing at this point, or whether we should really be concentrating our efforts on slaying superstitions? In other words, as you make the encoding more complicated, you make the decoding more complicated, and more costly. I wonder, therefore, whether you probably create an environment that will create more superstitions, that is, more wrong interpretations of what the data are? My perception is that, presently, we have too many superstitions, and perhaps more of the effort should be devoted toward slaying superstitions rather than building more complicated encoding devices.

Answer:
I think your observations are well founded. I think that the cost/benefit argument is one that probably can never be solved -- that there's no way of making that animal hold still. We're not going to model this process simply in order to look at it and to study it. This is another case where the real world is probably the only model that we can derive. But rather than passing on it, let me just say that I think that the cost of it (standard building) is probably not all that significant relative to other considerations. Secondly, collectively we are not looking for perfect or ideal models. I'm just saying they can't be built and, therefore, that we have to adopt, if you wish, a system that will approximate useful models for a period of time. In that connection, I think what the Trueblood Committee did was quite worthwhile. (Since you spent a lot of time on it, and I didn't, I think it was quite worthwhile.) I also think it's quite worthwhile and necessary that the FASB has spent three years on the Conceptual Framework; frankly, the Conceptual Framework is another attempt to enable us to look at things from a general viewpoint given certain constraints about the rules. The thesis repeats itself -- if we know what those constraints are, and what the output is given those constraints, they are adaptable. Adapting from rules is a little easier in use, so I think it will be worthwhile to have the Conceptual Framework. At least it will quiet some people who have been saying, "Don't look at this problem until you have the Conceptual Framework." At the same time, I very, very much want to stress that when the Conceptual Framework is finally with us, we should accept it, adopt it, and put it into standards. Remember the way I use the word "standards." It is not an end-all; it is a relative measure to be used in another context with adaptive rules. I hope that we may be able to use it well, as I said, for perhaps twelve years; although experience tells me, once it gets in, it will be used, perhaps as a superstition, for maybe ten more years. The Conceptual Framework is no panacea. I think that no one from any group, the AICPA, the FEI, the FASB, the government or anybody else, should hold his breath and say that these problems are going to be solved simply. If we do agree on a conceptual framework, however, it will give us boundaries within which we can operate.

You can get all your Ph.D. students working on this problem, on developing adaptive rules therefrom. I think that's the important thing. We have it in measurement, in spades. You know, in fact, the only thing that "scientists" have over us "social scientists" is that they are more precise with their adaptive rules. They are more precise in their measurements because they define their jobs a little more narrowly, and they have better adaptive rules. Their rules are such that they can replicate their experiments, not all the time, but generally most of the time.


