Next in Wikipedia’s list of Unsolved Problems in Philosophy is the Molyneux Problem, which asks the following:
If a man born blind, and able to distinguish by touch between a cube and a globe, were made to see, could he now tell by sight which was the cube and which the globe, before he touched them?
As the Wikipedia article itself states, this is no longer a problem for the academic field of philosophy but rather for science (arguments calling science the “natural philosophy” notwithstanding). I don’t know whether psychology has, on the whole, definitively answered this question, but it seems as though the pieces are all in place to allow a scientific and experimental investigation of the problem, leaving us pedants with time to focus on other problems.
Next we come to the problem that has made this post take so long to get written. I simply don’t know what to do with it. The Infinite Regression of Justification notices that, with knowledge defined as justified, true belief, we must ask after the validity of any justification; that is, any justification must itself be justified. The infinite regress here is clear, and we must decide how to deal with it.
My only personal approach is to deny binaries and absolutism. For me, a fact that is justified down the entire infinite regress would be considered “absolute knowledge.” The reality of such knowledge is up for debate, but it is not material to this discussion: most of our knowledge is not so infinitely justified, and in this view we can, I think, have a better understanding of our knowledge-base if we accept a sort of Popperian critical approach to knowledge and allow that which we know to always be, in principle, mutable. There will be those facts whose justification is so deep and, perhaps, so widely varied (such as the assurance that the floor will be under our foot when we next take a step) that our assurance of their truth is close enough to total as to be effectively immutable, but so long as we recognize that we might indeed change our beliefs given the right new justification, we can resolve this infinite regression problem.
I’ve written about this problem as a justification for a deep analytical skepticism in the past, but that was not, I think, correct. I think instead we should not use this infinite regression as an excuse to refuse to ever know anything, but rather as a justification to recognize the in-principle limitedness of our knowledge. We can then think of knowledge as a web of related facts, models, and theories that lies atop the objective world and closely resembles it, but does not match it exactly. Learning and study and experience, then, which can refine our beliefs and give us more and deeper justifications for individual truths, can bring our individual webs of knowledge into closer resemblance to absolute reality, but an exactly accurate map (which may or may not be possible) would basically be identical to, and thus interchangeable with, absolute reality.
I know that this is less an attack on the problem of Infinite Regression and more of a loosely-tied-together rambling on related topics, but this is the closest I have to a response to this problem. It is a pain in the ass of a problem, and I think it may be at the core of the epistemological approaches strangling English-language philosophical creativity. I think the philosopher or school that can break this problem and introduce a new and vital knowledge-model will be a valuable contributor to Western thought.
To copy and paste viciously from Wikipedia,
Plato suggests, in his Theaetetus, Meno, and other dialogues, that “knowledge” may be defined as justified true belief. For over two millennia, this definition of knowledge has been reinforced and accepted by subsequent philosophers, who accepted justifiability, truth, and belief as the necessary criteria for information to earn the special designation of being “knowledge.”
In 1963, however, Edmund Gettier published an article in the periodical Analysis entitled “Is Justified True Belief Knowledge?”, offering instances of justified true belief that do not conform to the generally understood meaning of “knowledge.” Gettier’s examples hinged on instances of epistemic luck: cases where a person appears to have sound evidence for a proposition, and that proposition is in fact true, but the apparent evidence is not causally related to the proposition’s truth.
This problem provides for us a nice change of pace, in that the way I will examine this does not hinge on fuzzy categorical boundaries (although I probably could take that route and simply bitch about the category of thought-objects called “knowledge” being fuzzily defined). It seems to me as though such an attack would reduce what can be understood as a fascinating question of experience into a dry semantic debate, and the English-speaking world has been doing far too much of that this past century or so.
So the problem persists. It persists, in fact, as no shallow problem at the periphery of philosophy. It appears to be yet another of these boundary-conditional problems, yet another of these issues of resolution of scale; like the progress of scientific models, if we simply hone our thinking we can create a more accurate model of knowledge that derives its creative foundation from the currently accepted “justified, true belief.” But this makes the problem too easy, allowing us an apologetic “it’ll do for now,” and does not allow for the wider methodological space available to philosophy that cannot be approached under a scientific method.
In the first sense, we must criticize those philosophical directions that consider this a prescriptive problem. It is a dishonesty to try to sculpt a definition of what one can or should call “knowledge.” I think it safe to say that even philosophers who approach this problem in that vein will try to bind the scope of their definition to some resemblance of that which is called “knowledge” in ordinary conversation (otherwise the word simply becomes a vacuous word-symbol) and will let the resolution of the problem stem clearly and effectively from that definition. Since any legitimate approach will attempt to bind itself to the meaning of the word as used in daily conversation, it is most honest to simply drop the prescriptive attack and just attempt to describe what knowledge is, not what it ought to be.
The problem so revised, we are immediately and rather gracelessly confronted with perhaps one of the most inelegant and undying problems in thought: subjectivity. “Knowledge” to me is not necessarily “knowledge” to you, and you and I can both believe ourselves to know facts which are mutually contradictory. How is this resolved? The answer to that question, I would say, is “functionally.” More specifically, in our experience of the world, knowledge is not a static entity, not a simple aggregate of information with some positively correlating operator against the real world, but it is a body of function-serving tissue. The function which knowledge serves is to provide us with a framework for realizing and operating in the world around us.
Gettier’s Problem, then, resolves itself thusly: knowledge is not “justified, true belief” as much as it is that particular reflection of the world held within the individual psyche that re-projects itself back out onto the world, shaping the way the world presents itself to the experiencing agent. With this, we can begin to break down the boundaries of the epistemological and phenomenological questions and resolve them into a single, experienced moment of understanding where knowledge and world meet at the locus of the experiencing agent. In this sense, any belief or understanding which is sufficiently compatible with the objective world as to not cause a cognitive dissonance can be considered knowledge. This allows us to respect, and distance ourselves from, the individually solipsistic worlds of our neighbors while continuing to exist inside our own solipsisms, all while preserving the integrity of the intersubjective and objective spaces.
With this reconceptualization of knowledge and, more specifically, the relationship between knowledge and the experiencing/knowing agent, we resolve Gettier’s Problem by cutting off the source from which it would have spawned: prescriptive definition.
The problems here arise when we start trying to put clear lines between art objects and non-art objects, or between particular instantiations of an art object and some wholly original piece. This problem is not, actually, in any fashion different from yesterday’s problem of Aesthetic Essentialism. The fact is that these problems occur at the edges of the conceivable whenever we try to explicitly categorize.
“Art objects” as a word-symbol does not actually refer to any particularly definable class of objects. Naive categorization does not hold, and sooner or later we’re going to recognize that this is not just true when dealing with the set of all sets which do not include themselves. The problem here is the problem of vague predicates, the same exact problem that more explicitly rears its head in the Paradox of the Heap.
What needs to be understood is that all natural predicates are necessarily vague, albeit some more so than others. To express this more formally: it’s been understood for some time that a word is a symbol that refers to some entity in the world, be it material, conceptual, categorical, etc. There is, however, a fallacy in thinking that the entire world of entities is referred to, in some clearly defined relationship, by words. Words do not point cleanly to discrete entities; they point, more accurately, to regions of a space filled with entities, and the borders between those regions are simply not defined, much like a hand gesture pointing vaguely “over there.” In the right context, such a vague hand gesture can be quite helpful, and in the context of our ordinary lives, a word’s association with some nebulous ontic cloud is enough for us to get by.
We can, perhaps, encapsulate a large space of that which is considered art with some clear definition, but there will always be boundary problems. At some point, I will clearly articulate the theoretical underpinnings of this unassailable rift between our linguistic and ontic worlds, but for now let this particular argument be observational philosophy: nobody can clearly define art or what makes a particular piece of art because these concepts are not clearly defined in relation to the world about which they speak. For us to place rigid boundaries around them defies the natural meaning of the words, and is a case of philosophers self-interestedly building a world they can understand; this is not our duty. Our duty is to understand the world in which we exist already.
I’m going to attempt to begin a series of attacks on unsolved problems in philosophy. For this exercise, I’ve very carefully pored over the resources available and developed what I believe to be the most refined list of important individual problems. My methodology has been thus:
- Check the List of Unsolved Problems in Philosophy on Wikipedia
- Go down the damn list
My attack on this is crude and unrefined, because it spawns as a reflection of the problem which demands it. Essentialism in any form is a boldly ignorant position. Russell showed the problem with essentialism as it manifested itself in naive Set Theory; Wittgenstein annihilated essentialism in linguistic categorization; modern empirical psychological results deny cognitive categorical essentialism. The notion of an essence, be it in the definitions of sets, the construction of the universe, or the nature of language and thought, is the product of a childishly unorganized thought process.
Back in the philosophical day, according an essential nature to a thing or category was a fine way to organize our thoughts. It was a fantastic tool for slicing our world up into manageable, discussable bits. But we as philosophers need to engage our flow, we need to keep the challenge of understanding the world at a place where it meets our skill, and our skill has surpassed this. We know today that the word-symbol “table” does not refer to some prototypical, ideal table; nor does it refer to some list of features common to all tables; it refers to a nebulous cloud of tables, tables which enter the category of “table” by some subtler mechanism–probably family resemblance to other members of the category “table.” At the edges, “table” is ill-defined (the line between a “table” and a “desk” is not always clear, is it?), but we should no longer allow that to bother us. We should, instead, remember that the space of words, of nameable categories, is countable, whereas the space of reality is not so simply bound.
The same ill-definition holds for artistic ventures. The medium of an artistic enterprise cannot be accurately or completely expressed by the naming of a category. An aesthetic essentialist would agree that sonnets and haikus are both poems, each with strengths and weaknesses in communicating different ideas, and would thus hold that they are categories of media with their own essential expressive natures. But a particular endeavour which is either a sonnet or a haiku would also be a poem. Now, a poem has communicative strengths and weaknesses, certainly, but knowing the strengths and weaknesses of a poem does not give you all the same information as knowing the strengths and weaknesses of a more refined categorization.
I think most essentialist art critics would agree that it is not sufficient to know merely the fundamental physical media of a particular artistic endeavour (because that would simply be a list of fundamental physical particles: so many electrons, so many neutrinos, a smattering of quarks, etc) to develop reasonable judgment criteria. I think they would agree that it is necessary to have more information, to have a more refined sense of the medium. But the medium of a particular work of art is that material which is used to convey the artistic meaning or message, whatever that may be. The most refined possible notion of the medium of a particular work is precisely the work itself. That is, the work of art is precisely those materials which convey the work’s meaning.
To return full-circle: once upon a time, it may have been fair (because our sense was not so sensitively refined and our critical skill not so well honed) to judge a piece by lumping it in with other pieces that use physically or expressively similar material. And it is still intelligent today to contextualize a piece stylistically and materially. But we cannot allow our aesthetic judgment to rise forth from some notion of commonality among certain categories of art, because that leads to paradox and unrefined judgment–a sonnet and a haiku should not be judged by the same standards, but to judge a sonnet as a poem is to judge it by the same standards with which one would judge a haiku. A piece’s aesthetic quality can only be judged by standards arising from itself: from its intent, from its expression, from its form. In some sense, we could consider the essentialist thesis true: individual media do have certain strengths and weaknesses, and a piece can be judged by criteria rising from its medium. But the medium is the piece itself, so this becomes an entirely useless statement: it’s trivial that individual pieces have strengths and weaknesses, and it’s trivial that a piece gives rise to its judgment.
I think that concludes my daily pontification. As I warned you: my attack was crude, but so was the problem. Essentialism in all its forms is naive and immature, and so my attacks against it shall be correspondingly simple and immature: a philosophical “nu-uh that’s dumb.”
There is a paradox in metamathematics that asks the thinker to conceive of “the smallest number not describable in fewer than eleven words.” The paradox arises when the thinker notes that that description is itself ten words long, and thus any number to which it refers is, indeed, describable in fewer than eleven words.
This particular paradox is shown to be a false paradox when it’s pointed out that the number doesn’t necessarily have to exist. That is, any number which it describes would be paradoxical, but if that number doesn’t exist, there is no paradox.
Thinking along these lines, however, engenders thoughts about the whole problem of paradoxes of expressibility. Medieval theologians (such as Cusanus) endorsed the notion of God as a being so great, so Maximum, that complete comprehension of Him is impossible, that we cannot possibly understand God. Similar ideas have been expressed by others (the Taoist notion that “The Tao which can be spoken is not the eternal Tao” comes to mind), but the essential notion is that one is contemplating the uncontemplatable or expressing the inexpressible.
Many observers have pointed out a paradox here. This has been taken in a variety of directions, from rationalists using it as justification for a reductio that denies God, to dialetheists using these kinds of problems as justification for a worldview wherein some contradictions hold. But these vectors rely on the notion that this is a true paradox—a notion which does not, I think, hold water.
Let us take the most easily discussable instantiation of this: a concept that is inexpressible. Lao-Tzu wrote that “The Tao which can be spoken is not the eternal Tao.” This line has been translated in ways that are less paradoxical, but this is one of the most famous, and a version with which most Taoists are familiar.
The problem, as has been pointed out, is that this seems to imply that anything said about the Tao is false. If anything said about the Tao is not true, we’ve entered a sort of Liar’s Paradox, where that statement (as well as any of the other text relating to the Tao) is also not true, and therefore of zero epistemic value to us.
The failure, here, is in a confusion of accuracy with entirety. That is, the Tao is, according to Lao-Tzu, something whose fundamental nature, whose entire quality, is beyond articulation or expression. We cannot contain it in words. But that does not mean we cannot contain parts of it in words, or that it does not have some describable qualities.
Such is the case with the Tao. It is impossible to express it directly and entirely. But it is so infinite that it does include certain expressible traits, and thus we are not perjuring ourselves when we speak of it, even while claiming that the Tao cannot be spoken in its entirety.
A similar chain of reasoning holds when articulating apparently paradoxical qualities attached to other objects. If we say that God is too great to be conceived of, or more directly if we define God as that which is too great to be conceived of, we arrive at apparent paradox by pointing out that, in thinking of “an object too great to be conceived,” we are conceiving of God. But that misjudges what is happening when we conceive of something beyond conception. When we conceive of an object beyond conception, we’re instantiating a placeholder object and giving it, by fiat, the quality of inconceivability. Obviously, this object will be paradoxical, because the only quality it has is its inconceivability, and thus in conceiving of it with that quality, we are conceiving of it in its entirety—this stands in violation of that quality, and thus we get paradox. But God, when defined this way, is understood to be MORE than just an inconceivable object; God is the Creator or the Absolute Being or whatever else you want to add to God’s totality. But one of God’s qualities is that It is beyond full conception or understanding.
Paradoxes of this sort also resolve when we remember that expression (and even, if you construe it so, conception) is a process of representation. That is, an uttered phrase points to an actual state of affairs or an actual entity; the phrase itself is separate from the actual entity. The words are as a finger, pointing to the moon. We see paradox in these articulations when we confuse the finger for the moon.
Today, Thanksgiving, is traditionally a day of remembrance, family, and friends in the United States. We celebrate our community, our lives, and our families in repast and a break from work. But, in this day of community and remembrance, we naturally look to the past, and this celebration is stained by the bloody history of its tradition.
In school, we are taught that Thanksgiving is a traditional celebration that began in the seventeenth century when the pilgrims at Plymouth Rock came together with the Native Americans and dined in a multi-day feast. Standing in stark and morbid contrast to this is the treatment the Native American peoples received at the hand of the European invaders and colonizers, with millions slain by direct murder and disease. Entire cultures were absorbed into the American leviathan and have vanished in our relentless pursuit of the manifest destiny of the most powerful nation in recorded history.
And so the so-called intellectuals, the liberals, the culturally-sensitive remind us annually.
But those of us who remember that history are not the culprits. It is not ours, as individuals, to make recompense for that sin. These are crimes and actions committed by men in a day gone past. The sins of the fathers being visited on the sons is an ethic of a more barbarous era, neither ethically nor culturally fitting in a world with a more refined sense of justice than “an eye for an eye.”
And so the so-called patriots, the conservatives, the nationally-proud retort annually.
Now, perhaps it is the duty of the American government to respond with reparations and apology, with the argument being that the government is a continuously single entity, and therefore is at all times responsible for all crimes it has committed in the past. I am, personally, partial to this understanding, but I will leave that to other pundits, essayists, and theorists. Today I’m writing about the people, not the government. And the people are right. Both people. Our national and cultural progenitors fucked up. Big time. Our national and cultural history will forever be marred by the insensitive and barbaric actions of the past. But that’s not our fault, nor is it our responsibility to right the past wrong, and to demand that the people of America, the modern privileged Caucasian, pay the piper any price—be that price material sacrifice or emotional guilt—for this misdeed is a demand unworthy of the quality of mind that usually utters it.
However, being reminded of this past does serve to remind us of a responsibility we do have. It is our intrinsic personal and collective responsibility to refuse to allow those kinds of crimes to be committed again. We have an absolute ethical mandate to never allow ourselves—individually or culturally—to commit such crimes again, and a relative ethical obligation to prevent it from happening wherever we see it.
We must remember, though, that this responsibility does not stem from a reparative sense of justice. We do not have this responsibility because we must atone for the past. We have the responsibility because we inherently have this responsibility, as human beings and as social agents. The past can serve to broaden our awareness of the potential space of ethical dilemmas, but that previous-generational past should not have any moral bearing on our current course of action nor on our own personal emotional states.
We are not responsible for the sins of our fathers. The sins of our fathers, however, can guide us in opening our awarenesses to inform action of the now and teaching of our children.
Today is a day of celebration, of community, and of feasting. Let us celebrate, then, our abundance and our family and our joy. Let us not taint our celebrations with the guilt—either received or delivered—of sins that are not our own, and let us be thankful on this day that we are in a material position to allow us to act and think ethically.
Most of us accept, today, that there is a portion of the mind that the individual does not directly control. We have implicit assumptions, ideas, and beliefs that affect our daily lives, whether we are consciously aware of them or not. Most of us who do not consciously view ourselves as racist or do not espouse racist ideals have caught ourselves making inherently racist assumptions when we see someone of a different ethnicity on the street. Harvard’s Implicit Association Tests provide a striking empirical demonstration of this phenomenon. Most of us accept, today, that the subconscious maintains a degree of control and influence that is beyond our active, conscious awareness.
But the subconscious is not an independent, freely active beast. Our subconscious ideas are formed by our actions, our circumstances, our experiences. Every datum of input that strikes the mind shapes the mind. These data make often imperceptible (and currently immeasurably small) changes to our psychological makeup, and they become self-reinforcing.
Every time someone uses the phrase “that’s gay” as a pejorative, it reinforces the notion that that phrase is, indeed, a pejorative. And even those of us liberally-minded enough to be believers in equal rights for all sexual orientations, those of us who don’t espouse a conscious homophobia, will use that phrase to express a general dissatisfaction with a situation. Every time we use that phrase, we make homosexuality a deeper insult. The subconscious hears this, registers it, and makes it more valid to use again as a negative. When it becomes more valid as a negative descriptor of circumstances, it becomes more negative as a descriptor of human beings.
But we are not total slaves to this subconscious mechanism. Just because we have this predisposition towards using the word “gay” as an insult doesn’t mean that we must use it or interpret it as such. We are not immutable slaves to the subconscious.
It is possible—indeed, even easy—to use that part of the mind of which we are directly aware and which we feel ourselves to directly control to influence the subconscious. Remember: every datum shapes the mind. When someone says “that’s gay” and we make conscious note of the incorrectness of this descriptor, we reinforce the notion that the phrase is not, as it were, an apt description of the undesirable.
A campaign is running now to fight the use of this phrase because of the way it demeans homosexuals. What is, I think, more important to us as individuals, though, is the way that the use of phrases like this demeans ourselves, regardless of our sexual orientations. When we unthinkingly use a phrase like that, we are handing over a piece of our conscious agency to our subconscious apparatus. We surrender part of our freedom, our will, to a hidden agent inside the mind that acts without our mindful recognition and intention. As such, in order to re-assert our freedom from our own minds we must become mindful and aware of our actions, our words, our predispositions. When our casual language and assumptions conflict with our espoused beliefs, we can change those casual assumptions simply by refusing to allow the act to go unnoticed.
This cuts both ways.
Not only can we, through self-mindfulness, attack our implicit negative assumptions, but we can reinforce our positive assumptions. Every time you tell a loved one that you love them, you are reinforcing that love, and you are reinforcing love in general. This extends to the meta: every time you act in a mindful manner, you reinforce your own mindfulness.
This kind of awareness will not change our implicit assumptions overnight. But it will allow us to slowly take back our agency from the subconscious, it will allow us to slowly regain control over our own minds and our own ideas. Every single time we act to intentionally sculpt our own minds, we make a slight change.
The sculpting is not limited to linguistic actions. Every single datum shifts the mind. It is well documented that certain postures and physical microexpressions express our state of mind without conscious intent of physical display. It has also been empirically documented that the reverse is true: intentionally adopted physical displays can influence our state of mind. So every time we hold our bodies in a posture of aggression, we reinforce our own aggression. Every time we hold our bodies in a posture of confidence, we reinforce our own confidence. Every time we hold our bodies in a posture of inferiority, we reinforce our own inferiority. And every time we hold our bodies in a posture of peace, we reinforce our own peace.
As we become aware of our actions, our words, and our bodies, we can shape our own minds, and through this we can build respect, peace, confidence, love. We can, if we so choose, cultivate a mindfulness of self that allows us, through sheer awareness, to make ourselves better. But remember: every single datum shifts the mind. So when you cultivate peace and respect in yourself, and your body and words start to reflect this state of being, you project that outwards into the minds of others. Your projection becomes their data, and the subconscious apparatuses of others are affected, changed, sculpted. Through making ourselves better, we can make those around us better. Through the simple act of being aware of our selves, we can become the change we wish to see in the world.
The trend of specialization has been an observable phenomenon since, arguably, the dawn of civilization. With a sufficiently Darwinian outlook, it’s even possible to argue that reality on the whole has a tendency towards specialization. From a human perspective, however, it’s clear that specialization has cranked its process up a few notches since the Industrial Revolution.
With the advent of factories and the assembly line, jobs that had previously belonged to cobblers (a speciality unto itself) began super-specializing into jobs sewing together individual pieces of leather and other minute assembly-line tasks. With the dawn of the information age, we’re seeing an even further intellectual specialization where, instead of designing shoes, an individual may only engineer arch-support or improve methods to tool the leather and rubber that make the shoe. Where once constructions were designed and built by architects or engineers, now an individual engineer may only design and build homes, or roads, or the fire-containment systems for commercial buildings under ten thousand square feet.
In this flurry of specialization, those fields which affect our daily lives have become too complex for most people to understand. We lament the divorce of the people from the political process. Whole grassroots political movements are developing completely out of a misunderstanding of economic and political issues. This is because today, in order to actually understand a national economy, one must have devoted one’s life to studying economic systems. Today, in order to understand how politics happens, one must have devoted one’s life to the pursuit of political knowledge.
And so the rest of us, those who don’t specialize in economics or politics, are left in the dark, not understanding what’s going on, and we feel it. Some of us react by claiming loudly that it just doesn’t have to be that complex and by demanding a simplification of the system. Most of us react by deeming it futile to even try and by stepping away from the politico-economic process entirely, consciously or unconsciously leaving the state of the nation to the experts, the specialists, just as we do in every other domain of deep inquiry.
But today, in this very moment, specialization is beginning to develop a problem.
Innovation is, by definition, a change in the way things are done. To innovate, one must see a problem that is not currently solved and solve it. This, by definition, requires being able to operate outside the current confines of a particular field. Yes, it is possible inside most fields to innovate using only the tools and techniques of that field. But it is not possible to innovate using only the already existing knowledge of that same field.
More to the point, however, the problems don’t necessarily limit themselves to those that can be solved inside the teachings of a particular discipline. Once upon a time, a computer engineer with a background in materials physics could look at a computer system and say to themselves, “Ya know, I think we can do better.” And maybe they, in their study of physics, had learned enough about particle physics to know that the bizarre realities of quantum mechanics offered a solution that could, in theory, fundamentally open computational power to new horizons. But to do this, to develop this innovation, someone had to leave the bounds of their specialty.
In a world where technical knowledge in a given field of study is roughly doubling every two years, it becomes difficult for the specialist to keep up in their own domain, and nigh impossible for them to understand the intricacies of recent developments in other domains. This is why Stephen Hawking, genius physicist that he is, utters such nonsense as “philosophy is dead” and claims that “philosophy has been overtaken by science.” Dr Hawking understands the literature and discipline of physics, but he does not understand philosophy. He does not understand that philosophy stands side-by-side with physics on the forefront of academic study; he doesn’t understand that philosophy is creating theories of metaphysical reality that parallel theories of physical reality (“Many Worlds” does not mean the same thing to all academics), because it is outside his specialty. It takes someone who generalizes, someone who understands both physics and philosophy, to connect the two and realize that each field constantly informs the other.
This is one of the benefits of the University environment. Colleagues from departmentally disparate fields can work together to discuss and solve problems, to teach each other and to share knowledge. More than a few philosophy of science papers have listed as a co-author a faculty member of the philosopher’s institution’s natural sciences division.
But as specialization continues to increase, it becomes harder and harder for disparate specialists to communicate. Even inside a single discipline, different fields find communication to be difficult. If one sits in a classroom teaching continental philosophy and uses the language of analytical philosophy (or, God forbid, vice-versa), one’s point will be lost.
So a new specialty is developing or, rather, re-developing.
It has been said that the age of the generalist is dead. No longer can someone study and contribute to every field of study as did heroes of the past like Hegel and Descartes. No longer can an individual work on the forefront of innovation without some specialization. Even a liberal arts education today is more of an educational pyramid, with individual students still specializing (under the guise of “majors” or “areas of concentration”) in political science or sociology, with only a smattering of cross-discipline education. So the age of the generalist is dead; so it has been said.
This thesis may have held twenty, or fifteen, or even ten years ago. Its grip is shaky today, however, and in ten years it will have been rendered obsolete.
Specialists can’t understand the recent advances in other fields well enough to incorporate those advancements into their study. Specialists in separate fields don’t know how to communicate to each other well enough to generate the degree and fluidity of discourse that once may have been held. But separate fields still must draw on the advances of others in order to continue to innovate, often creating new specialties in the process (computational neuroscience, anyone?). This is where the twenty-first century generalist steps in.
In the twenty-first century, the generalist will no longer be a direct source of innovation. In the twenty-first century, the generalist will no longer be the paragon of thought and education he might once have been. But in the twenty-first century, the generalist will be the force of innovation, the medium of communication, and the wellspring of our future.
The twenty-first century generalist may not know how to combine tablet computing, news media, and the medical research industry in a sustainable fashion. He may not know each field well enough to be able to engineer a solution that is effective, popular, relevant, and profitable. But he does know each field well enough to know that it can be done, and he knows each field well enough to know who to bring together to make it happen.
The twenty-first century generalist will get the astrophysicist and the sociologist to sit at the same table and talk about human society and its place in the cosmos. The twenty-first century generalist will be equal parts engineer, poet, scientist, philosopher, salesman, businessman, con artist and mystic. And it’s this multi-faceted specialist-in-general that will re-mobilize the disaffected masses. It’s the twenty-first century generalist who can see those people that today’s intelligentsia think of as the huddled masses—the idiots pushing this country towards Idiocracy—as the specialists they themselves are. The generalist will know that construction workers and massage therapists and slacker stoners are every bit as important for accomplishing greatness as are the heroic intellects sat aloft in their airships floating above the ivory towers of yesterday’s now-irrelevant elites.
The country, the world, is not headed towards doom. It is not headed towards a sea of incompetence and inadequacy driven by a greater and greater polarization between the masses specialized in their daily lives and the governing authorities specialized in running the show. We are at a cusp point, yes. We are at a place where the tension between the governed and the governing is at a screaming crescendo, but the solution is already developing. The tension will not break this country, will not break the world. The tension is already delivering artists, individual instantiations of a moment of holistic vision that will deliver us from the deepening stream of specialized non-communication. From thesis and antithesis, we have synthesis.
There are seven billion people in this world. The generalist is the air that carries the sounds they make to bring us together to build a future.
I was in the bookstore today looking at the shelves containing my soul, my heart, my dearest love, pure thought expressed under the heading “Philosophy” sandwiched (perhaps ironically, perhaps fittingly) between “New Age” and “Christianity” when I realized that the results of thought are no longer useful, entertaining, illuminating, enlightening, true. I realized that now the question is modes of thought, of ways of thinking, of manners of discovery, of means to access the truth from our own power.
I thought that Hume had it right—and that article I read—we now know reason to be objectively empty; reason is built from axiomatic assumptions yanked from the proverbial ass: FUCK “self-evident truth.” It just doesn’t exist (right, non-Euclidean geometry?). And I’ve known for a while now that continuing to move through the point-A to point-B machinery of logic wouldn’t serve my holistic purposes anymore, and now we have to examine slices of reality of different shapes, for different purposes, from different angles, and we have to—if we wish to understand deeper and appreciate more and sense greater—unhook ourselves from the bonds of reason and step into a more continuous flow that moves with a great big “bite me” in the face of discrete typographical formalisms.
Not to say that those formalisms don’t have value: they do. They’re gorgeous, they’re wonderful, they’re powerful, but they aren’t the whole damned story, are they? Of course not. Nothing is the whole story. There is no whole story. The whole story is the slice of the story at every moment from every angle summed up into this instantaneous, momentary experience of subjectivity exploded outward into concrete absolutism.
And I realized that to revolutionize thought, to think in new ways, that was the way to access step three or four or n where n is the current stage of philosophia plus one. So here I am taking a leaf out of the postmodern novel and not restricting my motions of abstraction into the logical, the reasonable, the discrete and typographical and just letting go into a flow of thought, the mind-stream as it were or psychosis or whatever you wanna call it, it’s my real.
So we flow meaninglessly perfect from this moment unto the next with fingers hammering out apparent nonsense into the digital world for your consumption if you so choose. But this nonsense, see, it makes sense to me. Or if not sense, at least it begets a manner of understanding, it communicates an idea past that which pure logic in the Aristotelian sense could deliver.
If we want to move forward we have to realize that the logical field has been holistically captured; all that’s left are the internal details. But there’s more to be thought outside the domain of logic, and this is where it is: in uninterpretable, objectively vapid thought-vomit that stews in its own inanity and glorifies itself with its own ironic iconoclasm, shattering meta-levels all the way into the annoyingly transfinite and wishing it hadn’t missed the lecture on induction across the infinite myriad of infinities.
Think with me, my friends, and glory in your own absolute relevance, achieved through a following-through with the patterned scale of your own intellectualism. My “meaning,” my intention, should no longer be the source of your understanding of my words or my understanding of your words. Let the postmodernists have their way, and we, the philosophers who don’t produce our work under the title of “literature,” shall be freed even from ourselves, and that, my friends, that is thought’s next level, next moment, next pirouette into the skies of the absolutely unlimited theory.
If we reject the dual thesis, we deny, by definition, a reality of separation. Whether we do this as monists who positively assert that reality is One or we do it as more literal non-dualists and simply deny that separation is a definitive and absolute quality of Being, we are moving ourselves into a mode of reasoning whose articulation requires reason prior to separation.
But a word is a symbol whose meaning is some separated thing, quality, or action. While understanding this in a hard-and-fast manner is a mistake—a word more realistically means a sort of nebulous cloud of ideas—there is still, by necessity, a separation inherent in the act of articulation. When we cast ideas or trains of thought into language, we are imposing a semantic structure on reality; many non-dualist schools of thought spend considerable effort trying to bring the student away from linguistic thinking.
If, then, as is said, “The Tao which can be spoken is not the eternal Tao,” and any articulation is of its nature incorrect, why should the philosopher—as thinker, teacher, shaman, or guru—bother to speak? The answer, I think, lies in the capacity of the mind to abstract.
Immediately and selfishly, the articulating process allows us to see how close we can get. It’s a way of testing our own skill in a game of man-versus-reality, to see what we can, in our cleverness, capture about a non-dual structure in a dualistic system. It allows a game of thinking that hones the skill of articulation like no other, especially if the thinker can hold in mind the pre-linguistic structure of reality during the moment of articulation.
More usefully, however, if the philosopher views a non-dual framework as a way of seeing reality that can improve the lives of others, that philosopher may attempt to articulate the inarticulable in order to help paint a rough picture of the path of thought that should be taken for the student to reach that same understanding.
This seems like an inherently elitist undertaking, and I suppose that in a very real sense it is. The supposition that the speaker has achieved some kind of “secret knowledge” that he or she is attempting to impart upon the listener is without question presumptuous. But if the speaker speaks in recognition of the validity of an infinite array of paths, and speaks with humility in the knowledge that the path which he or she speaks may not be the path for the listener, then I think that the presumption is dissipated, and that the speaker then speaks as a person offering advice to a friend.
Words, then, can become a tool that we can use to shape an idea in the mind of another; we can use them to sculpt a form and to help develop a position that, when the other person reaches out and grasps it, allows the assumption of the positionless position, and steps the person through the gateless gate.
We do not believe ourselves to be speaking truth, because we understand that speaking truth is not something that is done. We believe ourselves to be a finger pointing towards the moon, and we try to remind ourselves and the person to whom we’re talking to not confuse the finger for the moon.