On Path and Edge Strengths in Fuzzy Graphs

How does one define the strength of a path in a fuzzy graph? Mathew and Sunitha state that “the degree of membership of a weakest [edge] is defined as [a path’s] strength” without any apparent justification. Saha and Udupa note that several measures, including sum, product, and minimum, all seem plausible, but ultimately choose the minimum for their purposes (based on a set of axioms related to their problem domain).

However, many methods of combinatorial optimization which operate on weighted graphs assume that the weight of a path is simply the sum of the weights of the edges which comprise it. In order for me to use such methods unmodified, I must define edge weight in terms of edge strength, and define path weight using the sum. I cannot, therefore, define path strength directly, and certain definitions (including the apparently popular minimum-edge one) are impossible to achieve this way.
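A toy comparison makes the incompatibility concrete. The snippet below (my own illustration, not FuzzPy code) evaluates two candidate paths between the same pair of nodes: the minimum-edge definition prefers one path, while sum-based minimization over the naïve reciprocal weights prefers the other, so no additive edge weight can reproduce the minimum rule.

```python
# Two candidate paths between the same pair of nodes, each given as a
# list of edge strengths (mu values). These numbers are contrived.
single_hop = [0.5]        # one edge of strength 0.5
two_hop = [0.6, 0.6]      # two edges of strength 0.6 each

def min_strength(path):
    """Minimum-edge definition: path strength = weakest edge."""
    return min(path)

def sum_weight(path):
    """Sum-based path weight using the naive per-edge weight 1/mu."""
    return sum(1.0 / mu for mu in path)

print(min_strength(single_hop), min_strength(two_hop))  # 0.5 vs 0.6
print(sum_weight(single_hop), sum_weight(two_hop))      # 2.0 vs ~3.33
```

By the minimum rule, the two-hop path is the stronger one (0.6 > 0.5); by sum minimization, the single hop is preferred (2.0 < 3.33). The two definitions select different paths from the same data.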

Naturally, since the weight represents a cost of some sort and smaller weights are more desirable, edge weight should vary inversely with edge strength \mu. In FuzzPy, I simply take the reciprocal of \mu, which I now realize is naïve. A more correct basic definition would be 1/\mu - 1, so that an edge with \mu = 1 yields a weight of 0.
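The revised definition is a one-liner; a minimal sketch (the function name is mine, not FuzzPy's API):

```python
def edge_weight(mu):
    """Edge weight from edge strength mu in (0, 1]:
    1/mu - 1 maps full strength (mu = 1) to zero cost and
    grows without bound as mu approaches 0."""
    return 1.0 / mu - 1.0

print(edge_weight(1.0))   # 0.0
print(edge_weight(0.5))   # 1.0
print(edge_weight(0.25))  # 3.0
```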

A model of pairwise camera field of view overlap in a multi-camera network (a fuzzy vision graph) is built from a generally intransitive binary relation: if camera A overlaps with camera B, and camera B overlaps with camera C, camera A does not necessarily overlap with camera C, and almost certainly not to the degree implied by a transitive fuzzy relation. This intransitivity is the final nail in the coffin for non-sum-based path strength definitions in my problem domain.

Returning to the definition of edge weight, now in the context of multi-camera networks, there is something missing. Forgive me a contrived example from a problem such as calibration optimization. Intuitively, a path of eight edges each with \mu = 1 is likely to be less optimal than a path of a single edge with \mu = 0.9, because the definition of \mu does not fully encapsulate the pairwise calibration error (not least because the \mu values are normalized). Should edge weight instead be defined as 1/\mu - 1 + \alpha, where \alpha is some fixed accuracy (or other) cost for a single hop? My aforementioned naïve version (effectively, with \alpha = 1) has been working reasonably well in experiments. However, this extra parameter is not intrinsic to the fuzzy vision graph model, and thus must be defined by the optimization application.
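The eight-edge example can be worked through numerically. The sketch below (my own names and numbers) shows that with \alpha = 0 the eight perfect hops cost nothing, which is the counterintuitive outcome; any \alpha above 1/63 (solving 8\alpha = 1/9 + \alpha) flips the preference to the single \mu = 0.9 edge:

```python
def edge_weight(mu, alpha):
    """Per-edge weight 1/mu - 1 + alpha, where alpha is a fixed
    per-hop cost supplied by the optimization application."""
    return 1.0 / mu - 1.0 + alpha

def path_weight(strengths, alpha):
    """Sum-based path weight over a list of edge strengths."""
    return sum(edge_weight(mu, alpha) for mu in strengths)

eight_perfect = [1.0] * 8   # eight edges, each mu = 1
one_good = [0.9]            # a single edge, mu = 0.9

# With alpha = 0, the eight-hop path is free and beats the single hop:
print(path_weight(eight_perfect, 0.0))  # 0.0
print(path_weight(one_good, 0.0))       # ~0.111

# With alpha = 1 (the naive 1/mu version), the single hop wins:
print(path_weight(eight_perfect, 1.0))  # 8.0
print(path_weight(one_good, 1.0))       # ~1.111
```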

The rationale behind the fuzzy vision graph is that it is a “quick and dirty” model of a camera network’s visual topology, and I think defining additional application-specific things like this \alpha parameter at the optimization stage is appropriate.

Oct 17th, 2009