
Why sequencers don't work for me: logic-heads, please read

Archive: 18 posts


I've been involved in a lot of discussions on testing for negative or zero analog values. In an effort not to further derail those posts, I finally worked up a test circuit that demonstrates the issues I've been having using sequencers to test for analog values. Here's the circuit:

http://i7.lbp.me/img/ft/d4706b01b7d0e95e44d95317d375af2879dbc8b5.jpg

The top left battery is set to 90. The second one is set to 30. Basically, the combiners are subtracting 30 from 90, then taking that result and subtracting 30 again, and then one more time (90-30-30-30=0). The output is displayed in the probe on the left. Because the result is left with a -1 digital value, I use the bottom left circuit to subtract 0 analog, 1 digital from it using another combiner. The right hand probe displays this value (zero).
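For reference, the same arithmetic can be reproduced on a PC. A minimal Python sketch, assuming (as later posts in this thread argue) that analog signals behave like single-precision floats scaled to 0.0-1.0:

    # Sketch of the combiner chain, assuming single-precision analog values
    import numpy as np

    battery_90 = np.float32(0.90)
    battery_30 = np.float32(0.30)

    # Three combiners, each subtracting 30: 90 - 30 - 30 - 30
    result = battery_90 - battery_30 - battery_30 - battery_30
    print(result)  # ~ -5.96e-08: a tiny negative residue, not 0.0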

It's not shown here, but Balorn's probe also shows a result of zero analog, zero digital.

When fed into a sequencer set to positional with a battery that is stretched across its length, it actually returns the value in the battery (as shown by the lit LED bulb). And yes, I tested the sequencer by feeding a battery set to 0 into it, and it works as expected. I also tried feeding the value from the subtraction directly into the sequencer, without removing the digital component, with the same results.

What I discovered is this: if I use two combiners in any circuit on its way to a sequencer, things work as expected; for instance, setting the first battery to 100 and just subtracting 50 twice works fine. However, more than two combiners result in the sequencer always giving a value, regardless of the input signal.
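A quick sketch of why the 100-minus-50-twice case can behave, assuming the floating-point storage discussed later in the thread: 0.5 is a power of two, so every intermediate value is exactly representable and the subtractions lose nothing.

    import numpy as np

    full = np.float32(1.00)   # battery at 100
    half = np.float32(0.50)   # battery at 50
    # Both subtractions are exact, so the chain really reaches zero
    print(full - half - half == 0.0)  # True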

Anyone out there understand what's going on here? This is driving me nuts.
2011-02-23 14:19:00

Author:
Shanghaidilly
Posts: 153


Well, combiners act weirdly in more than one way. For example, the top combiner in your picture is giving an output of 60, and yet the cable looks like it's giving a 0. Also, the bottom configuration will always give a value different from 0, even with the battery at 50%, which should give an output of 0.
2011-02-23 14:42:00

Author:
Shadowheaven
Posts: 378


Well, combiners act weirdly in more than one way. For example, the top combiner in your picture is giving an output of 60, and yet the cable looks like it's giving a 0. Also, the bottom configuration will always give a value different from 0, even with the battery at 50%, which should give an output of 0.

I concur, but RTM mentioned in his blog that when working with analog signals, the lighting of the wires doesn't give an accurate representation, because it shows only the digital component. And yes, the bottom circuit gives a value different from 0, but it's only supposed to handle the digital component (50 analog, 1 digital, minus 50 analog, 0 digital). That gives a 0 analog, 1 digital value. That's where things get weird.

Here's the original circuit I was working on when I discovered this. The thing to look at is the middle probe, which is hooked to the output which is going into the small sequencer in the bottom right. It reads zero analog, zero digital, yet it causes the position on the sequencer to be non-zero.

http://i6.lbp.me/img/ft/6f68bbe1fe6e2d2e89292b42dce34a1c097eee9d.jpg

Oh, and look at the top probe. It reads negative zero analog, negative 1 digital. If this signal is hooked into the bottom probe, which is more accurate, it reads exactly the same. So there's a possibility that this value is actually something smaller than the probe can display. (The probe only displays to a precision of 1/100th.)
2011-02-23 15:00:00

Author:
Shanghaidilly
Posts: 153


Yeah, I don't think the processing of analogue / digital separately counts as weirdness.


There was a bug in the Beta where subtraction of analogue signals (specifically the 100-i from a NOT gate) would throw some inconsistency into the mix when activating batteries on microchips. For example, if you put 10 batteries on a microchip and put in signals at 10, 20, 30, etc., the signal activates the battery to the left (which is annoying, to say the least; rounding down at that boundary point is not entirely desirable, but at least it seems consistent). However, if you put a NOT gate on that and activate the microchip, you find that 10 [100-90] and 20 [100-80] both activate the same microchip. Which sucks.

Anyways, I think that particular case has been fixed in the full build, but realistically, I wouldn't trust any analogue signal test that requires things to be exactly equal, which includes exactly 0. Testing for your signal being in the range +/- 0.1% is probably fine, and IIRC you can make a sequencer big enough to do that without any upscaling of the signal first.
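In code terms, the banded test rtm is suggesting looks something like this (a sketch; the 0.1% tolerance and the 0.0-1.0 scaling are illustrative):

    def is_about_zero(signal, tolerance=0.001):
        """Banded zero test: true if signal is within +/-0.1% of zero."""
        return -tolerance <= signal <= tolerance

    print(is_about_zero(-5.96e-08))  # True: a float-error residue passes
    print(is_about_zero(0.01))       # False: a real 1% signal does not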


You almost certainly do have a signal that is negative and smaller than the probe can read.

Your system is working fine - the output result is 0... ish. Which is probably the best you can hope for.
2011-02-23 15:12:00

Author:
rtm223
Posts: 6497


Your system is working fine - the output result is 0... ish. Which is probably the best you can hope for.

HAHAHA. An rtm-ism at its best.

Thanks, I just needed someone to concur with my suspicions. Now I just have to figure out how to test for <1.
2011-02-23 15:36:00

Author:
Shanghaidilly
Posts: 153


If you could add a few digits of precision to that readout, you would find that the value is not actually zero, but really darn close to it. If you want, I can publish my meter, which can give a more precise reading on analog signals. It nicely illustrates why what rtm describes happens (100-90 and 100-80 both activating the same part of the sequencer), and it would show the non-zeroness of that signal.
2011-02-23 18:44:00

Author:
Tygers
Posts: 114


If you could add a few digits of precision to that readout, you would find that the value is not actually zero, but really darn close to it. If you want, I can publish my meter, which can give a more precise reading on analog signals. It nicely illustrates why what rtm describes happens (100-90 and 100-80 both activating the same part of the sequencer), and it would show the non-zeroness of that signal.

That would be great if you could publish that meter. I could finally figure out where I'm going wrong and where the imprecision lies within my circuits.
2011-02-23 19:00:00

Author:
Shanghaidilly
Posts: 153


I'll throw it up on a level this evening; in the meantime, I had enough time to quickly take a snapshot showing this specific case.

http://ib.lbp.me/img/ft/8aba632cac3a4b95c12efb4172a26bbdf15c9853.jpg

It's the same as the top part of your circuit; batteries at 90 and 30.
2011-02-23 19:06:00

Author:
Tygers
Posts: 114


If you could add a few digits of precision to that readout, you would find that the value is not actually zero, but really darn close to it. If you want, I can publish my meter, which can give a more precise reading on analog signals.

Yeah, why be satisfied with a mere 4 digits? XD I'm always interested to see the bigger and better meters people have constructed, even if the accuracy is unverifiable and the usefulness of the precision is questionable...

Assuming analog signals are implemented as some kind of binary representation with a power-of-two exponent - which seems very likely: the basic problem here is that 0.9 and 0.3 can't be represented exactly in that format. If you try this kind of calculation on a PC (for instance, in Python, by typing "0.9 - 0.3 - 0.3 - 0.3") you'll get a small non-zero value (about 1e-16 in my test.)
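That experiment is easy to reproduce in a stock Python interpreter (double precision here, but the principle is the same):

    >>> 0.9 - 0.3 - 0.3 - 0.3
    1.1102230246251565e-16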

There's a very basic rule in computer programming as it relates to floating-point numbers: you can almost never count on an equality test returning true. A number will always equal itself, of course - but if you do any calculation on the number, precision errors will compound and even if, logically speaking, two numbers should be equal, due to the limits of precision, they won't. To make a meaningful equality test in cases like these you need to use an equality test that allows for some small error.
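The defensive pattern, in Python terms (the tolerance value is whatever your application can live with):

    import math

    a = 0.9 - 0.3 - 0.3 - 0.3                  # "should" be 0.0, is ~1.1e-16
    print(a == 0.0)                            # False: the exact test fails
    print(math.isclose(a, 0.0, abs_tol=1e-9))  # True: the tolerance test passes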

In LBP2 - compare a value using a direction combiner, and the result (if the values should be equal) will be very close to 0. Feed this result into another direction combiner as the positive input, and subtract a very small value. (Probably you're going to be limited by the game's available sensors for this: you could get something on the order of 1e-5 with a timer, for instance, possibly something smaller if you use a tag sensor with a huge range, and a tag at the very edge of its detection range...) Then if the result of that subtraction is negative, the values you were originally comparing are "equal" within that tolerance.
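A sketch of that wiring in Python, modelling a direction combiner as (positive input minus negative input); the epsilon value stands in for whatever tiny reference signal you can scavenge, as described above:

    def combiner(positive, negative):
        """Model of a direction combiner: output = positive - negative."""
        return positive - negative

    EPSILON = 1e-5  # e.g. a value on the order of what a timer can produce

    def within_tolerance(a, b, epsilon=EPSILON):
        difference = combiner(a, b)               # near 0 if a and b "should" match
        return combiner(difference, epsilon) < 0  # negative means a - b < epsilon

    x = 0.9
    y = 0.3 + 0.3 + 0.3                           # ~0.9 with float error
    # Run it both ways round for a two-sided equality test
    print(within_tolerance(x, y) and within_tolerance(y, x))  # True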



What I discovered is this: if I use two combiners in any circuit on its way to a sequencer, things work as expected; for instance, setting the first battery to 100 and just subtracting 50 twice works fine. However, more than two combiners result in the sequencer always giving a value, regardless of the input signal.

If that really is true, it could mean that the logic engine does some kind of pattern-recognition optimization to try to get the answers people expect in certain cases... It'd be interesting to find out whether your observation holds up.

The case of 1.0 - 0.5 - 0.5 isn't a good example, though: because those numbers and all the intermediate values of the computation can be represented exactly as a binary significand with power-of-two exponent. 1.0 - 0.5 - 0.25 - 0.25 should also yield an exact zero - so it might be a good test to see if your observation is really correct.
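The three cases side by side, again with np.float32 standing in for whatever the game actually uses internally:

    import numpy as np

    def chain(start, *steps):
        """Apply a chain of combiner-style subtractions in single precision."""
        value = np.float32(start)
        for step in steps:
            value = value - np.float32(step)
        return value

    print(chain(1.0, 0.5, 0.5))         # 0.0 exactly (two combiners)
    print(chain(1.0, 0.5, 0.25, 0.25))  # 0.0 exactly (three combiners!)
    print(chain(0.9, 0.3, 0.3, 0.3))    # ~ -5.96e-08 (three combiners)

If the three-combiner power-of-two chain also reads as a clean zero on the sequencer, it's the representability of the values, not the combiner count, that matters.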
2011-02-23 19:08:00

Author:
tetsujin
Posts: 187


Yeah, why be satisfied with a mere 4 digits? XD I'm always interested to see the bigger and better meters people have constructed, even if the accuracy is unverifiable and the usefulness of the precision is questionable...

Assuming analog signals are implemented as some kind of binary representation with a power-of-two exponent - which seems very likely: the basic problem here is that 0.9 and 0.3 can't be represented exactly in that format. If you try this kind of calculation on a PC (for instance, in Python, by typing "0.9 - 0.3 - 0.3 - 0.3") you'll get a small non-zero value (about 1e-16 in my test.)

I've actually been able to pretty precisely measure the internal analog signal implementation; it has precisely 24 bits of significant information (a 23-bit significand, plus the implicit leading 1, i.e. a single-precision binary floating-point number). I was also able to measure the number of bits in the exponent by repeatedly dividing the signal by 255 until it compared as a true 0 on a sequencer, and it too was exactly what you'd expect for a single-precision floating-point number.
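The same kind of measurement, done on a PC for comparison (np.float32 here is an assumption standing in for the game's internal format):

    import numpy as np

    # Count significand bits: halve eps until adding it to 1.0 is lost
    one = np.float32(1.0)
    eps = np.float32(1.0)
    bits = 0
    while one + eps != one:
        eps = eps / np.float32(2.0)
        bits += 1
    print(bits)  # 24: the single-precision significand (23 bits + implicit 1)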

As far as unverifiable and questionable... they can actually be quite useful. For example, if I hook a 6-digit display up to a score sensor set to 100,000, it becomes a current-score readout, and the logic can then act based on the precise score. Unverifiable? Just produce some known repeating fraction using a counter; e.g., 1/7 should produce something close to 0.14285714.
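For example, the repeating-fraction check in single precision:

    import numpy as np
    print(np.float32(1.0) / np.float32(7.0))  # 0.14285715, close to 1/7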
2011-02-23 19:32:00

Author:
Tygers
Posts: 114


Perfect! Thanks for posting that pic. So it's not the imprecision of the sequencer, but the imprecision of using a combiner to do subtraction. Now if I could figure out how to normalize the output. I am curious how you obtained such precision when there's imprecision in the tools available to do the measuring.

EDIT: Oh, I see you've answered my question before I asked it. I'm rethinking my circuits now, and keeping in mind the imprecision of floating point numbers. Thanks to everyone for their help.
2011-02-23 20:13:00

Author:
Shanghaidilly
Posts: 153


That's the thing: you just know there is imprecision, and as you rightly state, you can't measure anything accurately if it's pushing the bounds of the known accuracy of your test equipment. It does empirically prove that there is error in the analogue processing systems we use, though - something people were asking for the other day.


As for your earlier question about testing for almost equal to zero, just make a massive sequencer and place a battery hanging off the left hand side (so it activates on anything below 1/4th of a stripe). You can know what your actual margins of error are then, it will be +/- (0.25 / n) * 100%, where n is the number of stripes.
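Worked through for concrete sizes (the stripe counts are just examples):

    def sequencer_margin(stripes):
        """rtm's formula: margin of error is +/- (0.25 / n) * 100 percent."""
        return 0.25 / stripes * 100

    print(sequencer_margin(8))    # +/- 3.125%  for an 8-stripe sequencer
    print(sequencer_margin(400))  # +/- 0.0625% for a 400-stripe one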
2011-02-23 20:18:00

Author:
rtm223
Posts: 6497


Perfect! Thanks for posting that pic. So it's not the imprecision of the sequencer, but the imprecision of using a combiner to do subtraction. Now if I could figure out how to normalize the output. I am curious how you obtained such precision when there's imprecision in the tools available to do the measuring.

The imprecision isn't in the tools, it's in the underlying representation.

When you convert 0.3 to floating-point binary, you get a value that is not exactly 0.3. Same goes for 0.9. So taking that inexact 0.3, multiplying it by 3, you get a value that's not exactly equal to your (inexact) 0.9.

However, some operations you can do without any precision loss. For instance, multiplying or dividing by a power of 2, or subtracting out a binary floating-point value of the same magnitude as the digits in another floating-point number.

So if you had 0.3 (in binary: 0.0100110011 etc.) you could repeatedly pull the high-order bits out: "if the value is greater than or equal to 0.25 (binary 0.01), subtract that value (yielding 0.05)" - you can repeat that process until there are no bits left, without distorting the value along the way.
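A sketch of that bit-peeling in Python doubles; each subtraction removes the current leading bit exactly (Sterbenz-style), so no rounding error accumulates:

    def peel_bits(x, max_bits=60):
        """Repeatedly subtract out the leading power-of-two component of x."""
        parts = []
        power = 0.5
        for _ in range(max_bits):
            if x >= power:
                parts.append(power)
                x -= power       # exact: strips the leading set bit
            power /= 2.0
        return parts

    print(peel_bits(0.3)[:4])  # [0.25, 0.03125, 0.015625, 0.001953125]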


It does empirically prove that there is error in the analogue processing systems we use, though - something people were asking for the other day.


If you're referring to the "strongest of 4 analog signals" thing - I still haven't seen anything that would indicate that you can't rely on (min(x) - y = 0 for some y in the set x)... This case is very different.
2011-02-23 20:28:00

Author:
tetsujin
Posts: 187


That's the thing: you just know there is imprecision, and as you rightly state, you can't measure anything accurately if it's pushing the bounds of the known accuracy of your test equipment. It does empirically prove that there is error in the analogue processing systems we use, though - something people were asking for the other day.


As for your earlier question about testing for almost equal to zero, just make a massive sequencer and place a battery hanging off the left hand side (so it activates on anything below 1/4th of a stripe). You can know what your actual margins of error are then, it will be +/- (0.25 / n) * 100%, where n is the number of stripes.

Thank you. Exactly what I needed.
2011-02-23 20:35:00

Author:
Shanghaidilly
Posts: 153


If you're referring to the "strongest of 4 analog signals" thing - I still haven't seen anything that would indicate that you can't rely on (min(x) - y = 0 for some y in the set x)... This case is very different.

Possibly the fact that we have no idea what the actual representation of the data is, or what the exact processing involved is. I agree that subtracting a number from itself should return 0, and that a min should effectively be just comparison and assignment, even if they are floats... But


Do we know analog signals are stored as floating point?

Even if we all agree now that they probably are...


There's a very basic rule in computer programming as it relates to floating-point numbers: you can almost never count on an equality test returning true.

Why not stick by the well-established rule of thumb? Sure, if you know exactly what the bit representation is and the operations carried out upon it, then we should be fine. But we don't, so I still recommend that anyone use margins of error, as I always have done, since I first started talking about analogue comparators 4 months ago:


... an "equal to" test. Job done.

Well, sort of. Using that method, the difference between the two signals would have to be exactly zero, they really would have to be identical. However, in analogue systems, depending on what your input sources are, having two signals at exactly the same value is quite rare ... so we will often need margins of error on concepts like equality.



I still haven't seen anything that would indicate that you can't rely on
And we've seen nothing to actually demonstrate that you can rely on it, bar speculation after poking around at a black box. I've conceded that it will almost certainly work, but why assume? Adding margins of error is simply better practice. Your margins can be as small as 0.0625% (or 0.00625 if you prefer to consider range as -1 to 1), so it's not like you really lose any significant precision. It's also a slightly more efficient circuit.
2011-02-23 21:07:00

Author:
rtm223
Posts: 6497


I agree that subtracting a number from itself should return 0, and that a min should effectively be just comparison and assignment, even if they are floats... But


Do we know analog signals are stored as floating point?

Even if we all agree now that they probably are...


Whatever the representation, I can't think of anything that would cause (x-x != 0) - unless they were packing the other parts of the signal into low-order bits... If there is such a case, I think it'd be very interesting to find out about it.

I get what you're saying about putting the tolerance factor in - there's not really any down-side to it, if you're feeling like there's some possibility the test (without tolerance) would fail. But I don't feel it's worth guarding against an error scenario that I haven't seen, and don't believe would ever occur. One could put a lot of effort into guarding against such conditions, and still be bitten by some bug he couldn't foresee.

(EDIT): Aw, hell. This is ridiculous. Your answer is probably the more sensible one in general anyway...
2011-02-23 21:53:00

Author:
tetsujin
Posts: 187


Whatever the representation, I can't think of anything that would cause (x-x != 0) - unless they were packing the other parts of the signal into low-order bits... If there is such a case, I think it'd be very interesting to find out about it.

I get what you're saying about putting the tolerance factor in - there's not really any down-side to it, if you're feeling like there's some possibility the test (without tolerance) would fail. But I don't feel it's worth guarding against an error scenario that I haven't seen, and don't believe would ever occur. One could put a lot of effort into guarding against such conditions, and still be bitten by some bug he couldn't foresee.

He's not concerned by x - x != 0, but rather the exact value of x being modified during one of the compare operations. So really, it's f(x) - x, where f() is some unknown function that we THINK is just an identity function, but may not be. It seems unlikely, but not impossible. And as rtm said, all we have is a black box to look at.
2011-02-24 00:13:00

Author:
Tygers
Posts: 114


That would be great if you could publish that meter. I could finally figure out where I'm going wrong and where the imprecision lies within my circuits.

Published http://lbp.me/v/x6rgs9

The shorter of the two in that level will not always display the non-zero part of zero-looking values, but it will show all 0s if the value really is zero. The longer one should be able to show any non-zero signal close to 0 that the game can normally produce.
2011-02-24 14:50:00

Author:
Tygers
Posts: 114

