
Tag sensors, how do they work?

Archive: 13 posts


This graph is wrong

http://i600.photobucket.com/albums/tt82/rtm223/waveform%20example/sensorradii.gif

I have discovered that the "dead zone" for sensors with a minimum radius of 0 is exactly 25% of the way along (two sensors, one set with 0 min radius and one set with a min radius of "25% of max", will give identical values outside the dead zone). However, the relationship between the output and the distance still does not appear to be linear... unless the dead zone is actually a different size and this applies to sensors BOTH with AND without a minimum radius of 0...
2011-03-23 19:55:00

Author:
thor
Posts: 388


now I'm confused
2011-03-23 20:04:00

Author:
Unknown User


OK, here are the results from a sensor set to 30 max range, inverted (the number 30 works out nicely because we should expect ninths: 0.11111, 0.22222, etc.)


Distance | Output Value
0 | 0.0000000
2.5 | 0.0000000
5 | 0.0000000
7.5 | 0.0010582
10 | 0.1111111
12.5 | 0.2211640
15 | 0.3312128
17.5 | 0.4455026
20 | 0.5555555
22.5 | 0.6656084
25 | 0.7756572
27.5 | 0.8899471
30 | 1.0000000

I noticed that rotating the cardboard the sensor and tag were on changed the last 2 digits (sometimes the last 3).
Occasionally, for no apparent reason (just whilst moving things/editing/rewinding), the last 4 digits could change; this was reverted by re-placing the tag on the grid, even though it looked as though it was already on the grid and I hadn't changed it.

One can easily see that this is not a linear relationship; if it were, the intermediate points between the "correct" values 0.1111111 and 0.5555555 would HAVE to be 0.2222222 etc. but they are not.
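To make this concrete, here is a quick sketch (Python, run outside the game) comparing the measured outputs in the table above against an ideal linear sensor whose dead zone covers the first quarter of the range:

```python
# Observed outputs from the 30-unit inverted-sensor test above.
observed = [0.0000000, 0.0000000, 0.0000000, 0.0010582, 0.1111111,
            0.2211640, 0.3312128, 0.4455026, 0.5555555, 0.6656084,
            0.7756572, 0.8899471, 1.0000000]
distances = [2.5 * i for i in range(13)]

# A linear sensor with a dead zone over the first 7.5 units would
# output (d - 7.5) / 22.5 beyond the dead zone: exact ninths.
deviations = []
for d, o in zip(distances, observed):
    expected = max(0.0, (d - 7.5) / 22.5)
    deviations.append(o - expected)
    print(f"{d:5.1f}  observed={o:.7f}  linear={expected:.7f}  diff={o - expected:+.7f}")

# The deviations reach roughly +/-0.002, far above float32 rounding noise.
print(max(abs(v) for v in deviations))
```

The deviations are small but systematic, which is the whole point: they are too large to be plain floating-point rounding, yet too small to see on any in-game graph.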
2011-03-23 20:35:00

Author:
thor
Posts: 388


interesting find. so does that mean that it doesn't necessarily work the way it's supposed to?
2011-03-23 20:39:00

Author:
Unknown User


interesting find. so does that mean that it doesn't necessarily work the way it's supposed to?

It means it's inaccurate if you're looking for 3rd-decimal-place precision or greater. For things where the output is just making a light brighter, or an emitter faster, or something like that, it won't matter one bit. But if you're looking to use this in precise logic, you're out of luck. It also means my division calculator is as accurate as it can possibly be, given that it's based on a tag sensor to measure distance.
2011-03-23 20:50:00

Author:
thor
Posts: 388


I gotcha...that sucks. maybe you can use a different method...I would say maybe using timers, but timers are just as inaccurate...intriguing
2011-03-23 21:00:00

Author:
Unknown User


I wouldn't depend on ANYTHING in LBP2 being absolutely accurate when working with analog signals. This has been discussed to death and the conclusion is that if you want to use analog signals, then you have to build in some rounding to account for the inaccuracies.
2011-03-23 21:33:00

Author:
Shanghaidilly
Posts: 153


This graph is wrong

Apparently someone never heard of a sketch graph. The 100% and dead-zone distances were never supposed to be accurate; the graph was only there to indicate the difference between a 0 minimum distance and a non-zero minimum distance. I never bothered to test the exact relationship (and never claimed to).
it seems that the relationship is roughly linear, except for the fact that there is a smallish area close to the switch that will always give 100%

Which leads on to some speculation on the possible answers to your question:

1. Your experiment is fundamentally flawed in its accuracy (in terms of placement of objects, or measurement of signals). That you are quoting expected/desired experimental error in the "correct values" to be smaller than 0.0000001 throws up all manner of alarm signals if you are approaching this from a practical scientific standpoint rather than a purely theoretical mathematical standpoint.
2. The various floating-point calculations of the values in the system (subtraction, squaring, addition, square rooting) are fundamentally inserting error. Nothing can be done about that.
3. The cost of calculating the accurate Euclidean distance required to generate an accurate distance measurement is considered too high, so some computationally optimised but less accurate algorithm is being used.

1 and 2 feel more likely, and 3 is highly speculative, 'cause I really don't care enough to go out and research a) what candidate methods exist, b) what kind of accuracy they provide, or c) what kind of accuracy you might achieve if calculating the Euclidean distance directly. In general, for comparisons of distance in multidimensional systems you would use the squared Euclidean distance, as it is far more efficient... Obviously this kind of method cannot be used to calculate a proportional distance between two points in world space, so a calculation of the Euclidean distance (approximate or otherwise) would be required to generate a "closeness" value.
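A sketch of that distinction (hypothetical Python, not anything from the game's actual code): a purely digital in-range test can compare squared distances and skip the square root entirely, but a proportional "closeness" output cannot:

```python
import math

def within_range(dx, dy, r):
    # Digital detection: comparing squared distances avoids the sqrt.
    return dx * dx + dy * dy <= r * r

def closeness(dx, dy, r):
    # Proportional output: a square root (exact or approximate)
    # is unavoidable here.
    d = math.sqrt(dx * dx + dy * dy)
    return max(0.0, 1.0 - d / r)

print(within_range(3.0, 4.0, 5.0))   # 9 + 16 <= 25: True
print(closeness(3.0, 4.0, 10.0))     # distance 5 out of range 10: 0.5
```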

Which actually brings me onto another point that is highly speculative and possibly far more practical than any notion of exact accuracy in a fundamentally inaccurate system: If you only need a digital response out of a tag sensor, it's most likely going to be more efficient, in terms of performance, to switch to signal strength mode, as conceptually, signal strength mode should require less calculation. Benefits are unlikely to be significant, but it can't hurt, right? Maybe?




One can easily see that this is not a linear relationship

No. You can't see that. Not in the slightest. What you can "see" is that the values displayed on a probe do not tally with the experiment you think you have set up. And you certainly can't perceive the actual difference in any tangible way. If you take your results and graph them, then I defy you to see that they are non-linear, or to see any non-linearity in-game.


I wouldn't depend on ANYTHING in LBP2 being absolutely accurate when working with analog signals. This has been discussed to death and the conclusion is that if you want to use analog signals, then you have to build in some rounding to account for the inaccuracies.

Wise words my friend. Analogue systems have noise - simples.


Getting a feel for such systems and understanding how to pull the most out of the features they actually do provide will give far more benefit than worrying about what they don't provide and fussing over exactly how many decimal places you think you can get. Throughout all of the bashing around of exactly how accurate analogue systems are, I can't say I've seen anything that actually helps with any form of practical system design :s Accounting for noise is exactly how it works with analogue systems engineering in the real world (once you take things out of the classroom and into the actual real world) and it's no different in LBP. There is a fine art to working with analogue that, for me at least, is far more satisfying than any amount of cold, hard number crunching.
2011-03-24 00:30:00

Author:
rtm223
Posts: 6497


Apparently someone never heard of a sketch graph. The 100% and dead-zone distances were never supposed to be accurate; the graph was only there to indicate the difference between a 0 minimum distance and a non-zero minimum distance. I never bothered to test the exact relationship (and never claimed to).

I put that up at the top so that people wouldn't think it was correct, even though it is relevant.



Which leads on to some speculation on the possible answers to your question:

1. Your experiment is fundamentally flawed in its accuracy (in terms of placement of objects, or measurement of signals). That you are quoting expected/desired experimental error in the "correct values" to be smaller than 0.0000001 throws up all manner of alarm signals if you are approaching this from a practical scientific standpoint rather than a purely theoretical mathematical standpoint.
2. The various floating-point calculations of the values in the system (subtraction, squaring, addition, square rooting) are fundamentally inserting error. Nothing can be done about that.
3. The cost of calculating the accurate Euclidean distance required to generate an accurate distance measurement is considered too high, so some computationally optimised but less accurate algorithm is being used.

1 and 2 feel more likely, and 3 is highly speculative, 'cause I really don't care enough to go out and research a) what candidate methods exist, b) what kind of accuracy they provide, or c) what kind of accuracy you might achieve if calculating the Euclidean distance directly. In general, for comparisons of distance in multidimensional systems you would use the squared Euclidean distance, as it is far more efficient... Obviously this kind of method cannot be used to calculate a proportional distance between two points in world space, so a calculation of the Euclidean distance (approximate or otherwise) would be required to generate a "closeness" value.

I came to the same conclusions as you did... of course there is error somewhere. But it will be through approximation, thereby generating a nonlinear function. In ordinary 32-bit floating point you wouldn't see that kind of compound error just from finding the ratio of distance, even with the square roots.

From a practical standpoint, I'm never going to be able to place objects more accurately than by placing them on the grid, so yes, it could be that a grid square is not exactly 2.5 units, but then measuring distance would still be inaccurate. Signal measurement (measuring from batteries) is accurate to around 7 decimal places.

Your second option is very plausible, but floating-point calculations _should_ be more accurate than that (I just checked: 32-bit floating-point calculations are still that accurate even after repeated squaring and square rooting).
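Something like the following sketch demonstrates the point (Python, using struct to round every intermediate result to 32-bit single precision):

```python
import math
import struct

def f32(x):
    # Round a Python double to the nearest IEEE-754 single-precision value.
    return struct.unpack('<f', struct.pack('<f', x))[0]

x = f32(0.9)
y = x
for _ in range(5):
    y = f32(y * y)          # square five times...
for _ in range(5):
    y = f32(math.sqrt(y))   # ...then take five square roots back
print(abs(y - x))           # compound error stays tiny (well below 1e-5)
```

Ten consecutive float32 roundings still leave the result far more accurate than the 3rd-decimal-place errors seen in the sensor table, so plain rounding error alone doesn't explain them.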

The third option is probably the most likely. Using an approximation to the square root saves computation time (not much in this case, but it's probably a general-purpose function used everywhere) yet produces a nonlinear function.


Which actually brings me onto another point that is highly speculative and possibly far more practical than any notion of exact accuracy in a fundamentally inaccurate system: If you only need a digital response out of a tag sensor, it's most likely going to be more efficient, in terms of performance, to switch to signal strength mode, as conceptually, signal strength mode should require less calculation. Benefits are unlikely to be significant, but it can't hurt, right? Maybe?

No. You can't see that. Not in the slightest. What you can "see" is that the values displayed on a probe do not tally with the experiment you think you have set up. And you certainly can't perceive the actual difference in any tangible way. If you take your results and graph them, then I defy you to see that they are non-linear, or to see any non-linearity in-game.

Not on a graph, true (though perhaps on a graph filling an A3 sheet of paper you could). But for more accurate systems such as my division calculator, it manifests as an incorrect calculation beyond the 2nd decimal place. By experimenting with this, I managed to revise my system and gain an extra decimal place of accuracy. It could also manifest as compound error, or as collision issues with tight-fitting components (you must surely have come across something like this).



Wise words my friend. Analogue systems have noise - simples.


Getting a feel for such systems and understanding how to pull the most out of the features they actually do provide will give far more benefit than worrying about what they don't provide and fussing over exactly how many decimal places you think you can get. Throughout all of the bashing around of exactly how accurate analogue systems are, I can't say I've seen anything that actually helps with any form of practical system design :s Accounting for noise is exactly how it works with analogue systems engineering in the real world (once you take things out of the classroom and into the actual real world) and it's no different in LBP. There is a fine art to working with analogue that, for me at least, is far more satisfying than any amount of cold, hard number crunching.

"Throughout all of the bashing around of exactly how accurate analogue systems are I can't say I've seen anything that actually helps with any form of practical system design" Well, here's my first tip: when measuring distance, set the maximum distance to a multiple of 4 grid squares (10 units), set the minimum distance to 0, and move the tag away from the origin (in the other direction) by the same multiple of 1 grid square (2.5 units). This gives an effective measurement distance of the same multiple of 3 grid squares (7.5 units). For increased accuracy, subtract a small amount from the result.
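As a sketch of that tip in code (Python; the function name is made up, and the small constant subtracted at the end is just an illustrative calibration figure):

```python
def measured_distance(output, multiple=1):
    # Sensor setup from the tip above: max range = multiple * 10 units
    # (4 grid squares), min radius 0, inverted output, with the tag
    # moved back by multiple * 2.5 units so the nonlinear dead zone
    # falls outside the region actually being measured.
    max_range = 10.0 * multiple
    offset = 2.5 * multiple
    calibration = 0.0006  # illustrative constant correction
    return output * max_range - offset - calibration
```

An inverted sensor reads (d + offset) / max_range at distance d from the origin, so inverting that formula recovers d over the effective 7.5-unit (times the multiple) span.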

Also, it IS beneficial to look at this from a theoretical standpoint, because this isn't signal "noise", it's inaccuracy in the tag sensor itself, which I now believe to be the result of a square-root approximation function. A well-known approximation to 1/sqrt(x) is giving me results in the same ballpark (2-3 decimal places of accuracy). This could be the cause of many misalignment problems and physical anomalies in LBP2; in my experience at least, when I replaced the exact sqrt in my game engine with an approximation, many things stopped behaving the same.
2011-03-24 18:25:00

Author:
thor
Posts: 388


But for more accurate systems...
Again, this comes back to whether or not we would consider it wise to try to utilise analogue functionality to achieve "accurate results". It all seems rather meh to me.



When measuring distance [change the distance you are measuring first]: this is very, very limited in its applications, you realise? Sure, if you are attempting to make an accurate geometric calculator then marvelous, but for the most [probably] practical applications of distance measurement, modifying the distance is impractical. I'm also not buying the "subtract a small amount": making something more accurate by modifying it by a "bit", a "smidgen", or a "hair's breadth" doesn't sound very accurate to me. In seriousness though, I assume you are talking about tuning using calibration here, not just an arbitrary 'subtract a bit'?

because this isn't signal "noise" it's inaccuracy of the tag sensor itself.

Inaccuracy in the values produced? Otherwise known as noise. That the noise is introduced at the sensor rather than during transmission (which of course won't happen) does not stop it from being noise. Noise can be introduced at any stage of the system; it can even be completely deterministic in some cases (compression artifacts are a form of noise that fits this category). Neither does it change the fact that analogue systems are noisy and poor vehicles for the design of systems that need high levels of accuracy.


The truth really is that most analogue systems won't need any kind of high accuracy to be useful anyway. And by systems I mean systems, not components. Obviously a division component that can accurately produce an answer to 4 decimal places is not as good as one that can manage 6, but what exactly is the application? It's not anything that actually relies on calculating accurate values in a number range given by the level of accuracy you have with some form of numeric display (that typical misuse of analogue systems and the common argument for why they are untrustworthy), not unless you have a rounding tool as well (which you may, or may not, have; IDK). Because if you aren't rounding the values then you are more than likely to end up with compound errors over time anyway (the old 2+2=5 issue).


I'm not knocking the division system you have (which, despite being annoyingly physical, is rather elegant), or your findings about the nature of the Euclidean-distance calculation in LBP; I'm just discussing the issues surrounding the whys of analogue "accuracy"...



Edit: the 2+2=5 issue, for those of you who haven't come across it, goes like this:

2.4 is approximately equal to 2
2.4 + 2.4 = 4.8, which is approximately equal to 5
∴ it's reasonable to say that 2 + 2 = 5

It's nonsense, of course. But it's an example of what happens if you display the values rounded, and then try to use the raw values in further calculations: you end up with something like 2+2=5.
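The effect is easy to reproduce (a quick Python sketch):

```python
a = b = 2.4
# Each value displays as 2 after rounding, but the raw sum rounds to 5.
print(f"{round(a)} + {round(b)} = {round(a + b)}")  # prints "2 + 2 = 5"
```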
2011-03-24 22:23:00

Author:
rtm223
Posts: 6497


I'm also not buying the "subtract a small amount": making something more accurate by modifying it by a "bit", a "smidgen", or a "hair's breadth" doesn't sound very accurate to me. In seriousness though, I assume you are talking about tuning using calibration here, not just an arbitrary 'subtract a bit'?

Yes, for some reason the distance measured at 0 is ever so slightly more than 0, and this gap between expected and measured distance decreases as you approach 1, so some calibration is required for the most accurate results. I think I subtract 0.0006. Of course, only linear/piecewise-linear adjustments can be made, and a constant adjustment gets it accurate enough.

I hear what you are saying about applications, but tbh I am just chasing numbers
2011-03-24 23:07:00

Author:
thor
Posts: 388


One can easily see that this is not a linear relationship; if it were, the intermediate points between the "correct" values 0.1111111 and 0.5555555 would HAVE to be 0.2222222 etc. but they are not.

Also, I don't think anyone else has mentioned this yet...

I don't know how you're attempting to determine those decimal values, but bear in mind that none of the analog signal probes built so far are completely accurate. The closest is probably tetsujin's (https://lbpcentral.lbp-hub.com/index.php?t=51859-Big-Dumb-Probe), which attempts to take the floating point representation into account, but it's still not exactly correct.
2011-03-25 18:36:00

Author:
Aya042
Posts: 2870


Also, I don't think anyone else has mentioned this yet...

I don't know how you're attempting to determine those decimal values, but bear in mind that none of the analog signal probes built so far are completely accurate. The closest is probably tetsujin's (https://lbpcentral.lbp-hub.com/index.php?t=51859-Big-Dumb-Probe), which attempts to take the floating point representation into account, but it's still not exactly correct.

I used a signal probe, yes, but it seemed to give me accurate values. The battery signals were all accurate to 7 decimal places, and, for instance, 0.9 measured as ~0.89999997, which is roughly 0.9's value in 32-bit floating-point format.
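That figure checks out; rounding 0.9 to single precision outside the game (Python sketch):

```python
import struct

# Round 0.9 to the nearest IEEE-754 single-precision value
# and read it back as a Python float.
f = struct.unpack('<f', struct.pack('<f', 0.9))[0]
print(f)  # 0.8999999761581421
```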

Again, things like floating-point inaccuracies almost never add up to give errors in the 3rd decimal place. I honestly think it's because of an approximation like:
1/sqrt(x) ≈ reinterpret_as_float(0x5f3759df - (reinterpret_as_int32(x) >> 1))
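Written out fully, that's the well-known fast inverse square root trick, which reinterprets the float's bit pattern as an integer (a Python sketch; whether LBP2 actually uses this exact routine and constant is pure speculation):

```python
import struct

def fast_inv_sqrt(x):
    # Reinterpret the float's bits as a 32-bit integer...
    i = struct.unpack('<i', struct.pack('<f', x))[0]
    # ...apply the magic-constant estimate...
    i = 0x5F3759DF - (i >> 1)
    y = struct.unpack('<f', struct.pack('<i', i))[0]
    # ...and refine with one Newton-Raphson step, giving roughly
    # 2-3 decimal places of accuracy, in the same ballpark as the
    # sensor errors observed above.
    return y * (1.5 - 0.5 * x * y * y)

print(fast_inv_sqrt(4.0))  # ~0.499, versus the exact 0.5
```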
2011-03-25 21:15:00

Author:
thor
Posts: 388


LBPCentral Archive Statistics
Posts: 1077139    Threads: 69970    Members: 9661    Archive-Date: 2019-01-19
