In philosophy, one encounters more differences in style than differences in content. The advantage of this is that one comes across many ways of expressing similar thoughts. One thought I often try to express is that truth should be understood as revealing, and as always bound up with falsity, with concealing. When pressed to justify this claim against those who would rather grasp truth as a discrete phenomenon (T/F, 1/0), I often return to Aristotle and assert that the differentiation of the Logos is the condition for truth and falsity even in intentional speech, prior to the connection of subject and predicate in logos apophantikos.
This is, however, almost completely useless, as Aristotle has been appropriated by both the Continental and Anglo-American traditions, each for its own purposes. Furthermore, it is difficult to get anyone outside philosophy to believe the premise of the argument: that Greek grammar has anything to do with what we mean by truth.
Today, it is much more “believable” to talk about semantics as modeling the world, which offers different possibilities for understanding truth as revealing. It is perfectly understandable that models do not replicate the world exactly, and thus hide something about the world. It is also perfectly believable that if they did not hide something, they would have to be identical to the world – the model would simply be a life-size reproduction of the world, constituted out of the same matter and energy the world is. Such a model would fail to be reductive, and thus would be no easier to understand than the world itself; furthermore, since it would have to have the same sort of time as our world, it would offer no predictive power. So we end up with the conclusion that models are either true because they are false, or useful because they are false. The difference between “usability” and “truth” originates in the disconnection between ontology and epistemology: if questions about knowledge are no longer questions about knowledge of the world, we can grasp “truth” either as the internal coherence of semantics (epistemic – no concern for being), or as the unknowable relation between word and being (ontic – the mere form of truth, without concern for what we can know).
It is possible to critique both the semantic and ontic conceptions of truth, but it is difficult, because the separation of ontology and epistemology leaves open the retort: “Yes, perhaps, but we aren’t doing X, we are doing Y.” However, I found a passage in Deleuze that might get around this difficulty, as it is concerned merely with semantics, with the models themselves and the translatability between them. The issue at hand is not simply the ability of a model to show up the world in a particular way, but the possibility of translating one model (semantics) into another model (semantics). On the semanticist’s account of truth, everything internal to a model should be expressible as a set of discrete true/false statements, and therefore models should be translatable into one another, since they are all essentially made from the same stuff (sets of true/false statements). Deleuze describes the difficulty with this:
“Of course, it is possible to translate into a model that which escapes the model; thus, one may link the materiality’s power of variation to laws adapting a fixed form and a constant matter to one another. But this cannot be done without a distortion that consists in uprooting variables from the state of continuous variation, in order to extract from them fixed points and constant relations. Thus one throws the variables off, even changing the nature of the equations, which cease to be immanent to matter-movement (inequations, adequations). The question is not whether such a translation is conceptually legitimate – it is – but what intuition gets lost in it.” (Deleuze and Guattari, A Thousand Plateaus, pp. 408–409)
Deleuze’s point is that while it is always legitimate to translate one model into another, the process is actually messy when the models involve different sorts of lines. In a linear model, lines are defined by the points they cross: the coding of spaces is primary, and the equations that describe movements across these codings are secondary. Vortical models, on the other hand – projective geometry, nomad geography – hold the line between the points to be primary: the movement and its equations are prior to the codes they produce.
Think of it this way: imagine two machines. One machine is built in a settlement and is used for road-building that connects pre-existing communities. The lines, the paths, are secondary – they tackle the landscape in order to reach pre-existing points. This machine works in linear space, and must oppose the natural line with dynamite for the sake of reaching predetermined points.
On the other hand, imagine a machine that sets out with an indeterminate goal: produce a path, a road, a railway, but with no particular importance attached to where it leads. The line of the route will curve naturally with the sinuosity of nature, avoiding mountains, following river valleys. Towns will spring up along the way, but the road does not exist for the sake of these towns – the towns exist for the sake of the road. The towns do not exist in perpetuity but are radically contingent on the building of the road, supplying road-builders or extracting the easy resources made accessible by the road. Thousands of ghost towns in British Columbia are evidence of this sort of town–road relation.
Of course, it is rare, perhaps impossible, that these two machines would ever be completely separate. No one builds a road with no goal in mind, and no fixed goal completely prevents road-builders from following the easier route. In practice, the machines act more like the second kind when the points to connect are very far apart (as with the trans-Canadian railway). While the end points are determined, along with some points on the way, there is much choice about which route the railway will take between them. The contingency of the path is attested by the fact that it was built twice, with drastically different results. The machines are more like the first sort when the points to be connected are predetermined and of great importance – for example, Whistler and Vancouver. In this case the emphasis on the end points dictates that millions must be spent blowing up rocks and building overpasses, making the route follow less the line it finds in the mountain and more the fast, relatively uncurved line that high-speed automobile travel demands.
On the other hand, we might think of Fort Steele, a ghost town turned living museum in southeastern British Columbia. In this case the BC Southern Railway bypassed the thriving gold-rush town in favor of a shorter route, and the town quickly declined into abandonment.
Deleuze’s point is that although we can see both kinds of (in this case) route-building in every case, we cannot grasp the first kind through the model of the second, or vice versa. Rather, each throws out variables essential to the other, and what was essential escapes.