
Multi-conductor cables

Archive: 20 posts


Consider the case where you're building a logic circuit with binary numbers passed around on multiple wires, or a set of control signals that's common to multiple components - something like that... In these sorts of cases, you may have circuits which output a particular set of signals, and circuits which use that same set of signals - but to connect them together you still have to wire up each individual signal one at a time.

So what I'm suggesting is to have a system for grouping a set of signals into a "multi-conductor cable", so that when you've defined a circuit that produces this common set of signals and another circuit that uses these signals, you have just one connection to make between the two instead of a dozen. This would also make things simpler for the wire layout engine. (I love the wire layout engine - it does a great job of establishing order out of a chaotic circuit - but the thing needs help! Wire up a complicated circuit and the layout engine slows to a crawl... It's much happier if you can arrange for fewer wires to be drawn!)

Internally, I expect a feature like this would be implemented just as a convenience tool for creating, destroying, and rearranging multiple connections at once. So there would be no connecting a multi-connector directly to a sequencer input, for instance. (There could be sensible ways to handle connection of a multi-connector to other things - like logic gates - but it's probably not worth the effort.)

I'd envision a couple different ways a feature like this could be implemented:
First, a new logic item could be introduced: the "multiplexer". Tweak a multiplexer to set the "connector name" - you can only connect two multiplexers together if they have the same connector name, much as you can only connect a USB connector to a USB connector, or a headphone plug to a headphone jack. (Alternately, connectors could be universally compatible by matching input/output names.)

Open up the multiplexer's board and there will be a certain set of inputs and outputs, all wired up to the multi-conductor cable. These inputs and outputs can be named in the same way as is done with circuit board inputs and outputs - and the circuit board could host logic like any other circuit board. Tweaking a multiplexer to change its "cable end" switches its inputs to outputs and vice versa - cable ends must have opposite "cable end" settings in order to be connected.

Then, to create multi-conductor connections between circuit boards (or even between different parts of a single board) - wire up the multiplexers at each end as desired and then wire the two multiplexers together. If a board boundary is crossed, a multi-conductor connection at the boundary will be established.
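To make the bookkeeping concrete, here's a rough sketch of the data model I have in mind, written out as C purely for illustration - everything here is hypothetical, and create_wire() is just a stand-in for whatever the editor does internally:

    #include <string.h>

    /* Hypothetical editor-side model: a multi-conductor cable is just a
       named bundle of ordinary connections.  Hooking two cable ends
       together expands into one wire per line - no new runtime logic. */
    typedef struct {
        const char *connector_name;   /* ends must match to connect      */
        int         is_output_end;    /* the "cable end" setting         */
        int         line_count;
        const char *line_names[16];   /* named like circuit-board I/O    */
    } CableEnd;

    extern void create_wire(const char *from, const char *to); /* stand-in */

    int connect_cable(const CableEnd *a, const CableEnd *b) {
        if (strcmp(a->connector_name, b->connector_name) != 0) return 0;
        if (a->is_output_end == b->is_output_end) return 0;  /* need opposite ends */
        for (int i = 0; i < a->line_count; i++)
            create_wire(a->line_names[i], b->line_names[i]); /* one hookup per line */
        return 1;
    }

Deleting or re-routing the cable would just tear down or recreate those same wires, which is why I'd expect this to be cheap to implement.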

Alternately, the establishment of multi-conductors could be done at regular microchip boundaries without a separate "multiplexer" element: for instance, when tweaking a microchip input or output, in addition to assigning the connection's name you could specify that the connection is part of a multi-conductor connector. I think this approach is a little more complicated than the multiplexer method, though - and how would you define a single circuit board with multiple connections of the same type?

To distinguish multi-conductors from regular wires, I guess they could be drawn as ribbon cables or just thicker cables...

Then if somebody's got a microchip with a multi-conductor connection on it and they want to make normal (non-multiplexed) connections to it, they would either need to create a matching multiplexer outside the microchip (to patch in the desired connection) or else bypass the connector, wiring a signal directly into a node on the circuit board...
2011-02-14 20:54:00

Author:
tetsujin
Posts: 187


How about an RF chip? Set the color/label and set the RF chip to input, and set the same color/label on another one to output. Now adjust them to the correct number of slots. Then anything plugged into the input chip will come out of the output chip, in the corresponding position.

This solves multiple problems:

Remote control via tags has no analog.
Long wires do not need to be drawn - there would be none.
And if you want remote control over an object that needs to be re-emitted, you can use just one RF chip pair instead of craploads of tags/sensors.
2011-02-14 22:12:00

Author:
zeel
Posts: 61


Remote control via tags has no analog.

You can transfer an analog signal via tag sensors if you set the Output Value to Signal Strength instead of Closeness.

This is a good idea for when you're making a huge microchip with lots of wires.

What I want more is an ID system that would allow sending data to and from emitted objects.
2011-02-15 06:00:00

Author:
waD_Delma
Posts: 282


Tags are already based on the idea of RFID (or, more broadly, NFC) technology. And as Delma said, they can send analog signals... or else my Tetris would not have a speed setting. We call it wireless logic.
Well, multi-conductor wires would make distribution a little bit annoying, though I like the idea for use inside a circuit. LBP2 already supports wire overlap, but it would be nice if you could control that.


What I want more is an ID system that would allow sending data to and from emitted objects.

But this is already provided by labels and colors on tags, and I don't see how you would find the ID of an object and utilize it in logic circuitry. If you need any kind of selection within a group of identical objects, then make the objects reactive, or send signals when specific conditions are met. For example, if you want to destroy objects that are in a specific zone, then add another tag and a sensor with the label "can_destroy", and AND-gate it with the "destroy" sensor. You can also play with zones to do that, or you can use holograms with tags and impact sensors set to "tag required" and send the signal that way.
2011-02-15 15:53:00

Author:
Shadowriver
Posts: 3991


But this is already provided by labels and colors on tags, and I don't see how you would find the ID of an object and utilize it in logic circuitry.

You cannot do that if you want to emit that object. The tag is the same on all emitted versions.
2011-02-15 17:11:00

Author:
waD_Delma
Posts: 282


Ahh, you're talking about something similar to parent / child / sibling relationships in data structures, which is something I'd quite like, along with the ability for an emitted object to differentiate between its own tags and its siblings' tags.

Though TBH, being able to signal several different objects emitted from the same emitter independently from a parent is actually a remarkably complex interface challenge - due to the (potentially) ever-changing nature of emitted objects as they are created and destroyed. The system would have to assign the lowest available ID to each object upon creation, which would be a very difficult system to actually control...
2011-02-15 17:28:00

Author:
rtm223
Posts: 6497


The discussion has wandered away entirely from what I was suggesting in the first post - but it's cool. Play the ball where it lies, I say. I think a remote control version of my suggestion would be great, too - but the reason I suggested a wired version is because I think it could be a useful organizational tool for big wired circuits...

Some means of establishing an independent control channel to newly-emitted objects would be pretty cool, I agree. It's hard to picture how that would work with the current state of LBP2 logic, though. The emitter would (presumably) have to output a signal that contains some kind of unique identifier for the newly emitted object, and then control logic would need to be able to use that identifier - it's a little different from the current system of signals over wires, as each new ID that comes out must be recognized as a discrete "message".
2011-02-15 17:45:00

Author:
tetsujin
Posts: 187


Consider the case where you're building a logic circuit with binary numbers passed around on multiple wires...

You can kinda do this already by using a DAC and an ADC. Single-precision floats have a 24-bit mantissa, but you might lose a couple of bits to rounding errors.
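To illustrate the arithmetic (just a sketch - the in-game DAC/ADC are built from gates and batteries, and the exact weights are up to the builder):

    /* Pack n digital wires into one analog value and peel them back out.
       Bit i gets weight 2^-(i+1), so everything stays within [0, 1). */
    float dac(const int bits[], int n) {
        float v = 0.0f;
        for (int i = 0; i < n; i++)
            if (bits[i]) v += 1.0f / (float)(1 << (i + 1));
        return v;
    }

    void adc(float v, int bits[], int n) {
        for (int i = 0; i < n; i++) {
            float w = 1.0f / (float)(1 << (i + 1));
            bits[i] = (v >= w);          /* threshold against this bit's weight */
            if (bits[i]) v -= w;         /* strip it off and continue down      */
        }
    }

With a 24-bit mantissa the lowest-order weights sit close to the rounding noise, which is where you'd lose those couple of bits.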
2011-02-15 20:19:00

Author:
Aya042
Posts: 2870


You can kinda do this already by using a DAC and an ADC. Single-precision floats have a 24-bit mantissa, but you might lose a couple of bits to rounding errors.

Do we know analog signals are stored as floating point? I mean, analog signals are never outside of the range [-1, 1] - and there's not much practical use in-game to super low-magnitude signals, either. (When is it useful to distinguish between 1e-10 and 0 in LBP? How long would it take to notice the difference between a motor bolt that doesn't turn, and one that turns one degree every 10 months?)

A bit off-topic, but every now and then people talk about how many bits of precision an analog signal has, and I have to wonder if they really know. It seems to me that representing analog signals in 16-bit fixed-point would probably be sufficient... Though maybe it's more probable that they fit the analog signal, digital signal, and player ID all in a single 16 or 32-bit word to save space - which would mean either 12 bits or 28 bits for analog, including the sign bit...

But about the DAC/ADC thing - yes, you can do that - and I've found cases where it's useful. But that means adding a bunch of logic to your circuit, to pack bits into an analog signal and then pull them out again. What I'm talking about wouldn't add any logic to the circuit (and it'd be able to carry analog signals, not just digital signals) - it would just serve as an organizational aid, reducing the number of visible wires, and reducing the number of connections you have to deal with by hand, in cases where you have a bunch of circuits that connect together in the same way...

As an example - last night I built a sequencer circuit with a bunch of microchips on it, activated by batteries. The output is taken from whichever microchip is active (by way of its battery) - to simplify wiring I did that by chaining them together - output of one chip into the inputs of the next. I wouldn't want to replicate a DAC/ADC pair for each chip in the sequencer - that would start to impact the thermo if there's a bunch of copies of the sequencer in the level - but if I take a chip out of the sequence and reduce the size of the sequencer, I want to have fewer connections to re-wire. Things like that.
2011-02-15 21:30:00

Author:
tetsujin
Posts: 187


Do we know analog signals are stored as floating point?

Not for certain, but based on some observations I'm fairly sure they are.
2011-02-15 23:02:00

Author:
Aya042
Posts: 2870


Not for certain, but based on some observations I'm fairly sure they are.

What sort of observations? I'm curious...
2011-02-15 23:30:00

Author:
tetsujin
Posts: 187


What sort of observations? I'm curious...

Well, it all started with speculation about the 160-hour bug (https://lbpcentral.lbp-hub.com/index.php?t=23971-Thin-or-theck-gas&p=416799#post416799), particularly the observations about the nature of the SPE chip, and how well-suited it is for crunching single-precision floats.

When we found oddities with the output values of three-way switches, which also pointed towards the use of floating-point arithmetic, I concluded that most of the math used in the simulation was floating point.

The same thing cropped up while testing the division of analog signals - the seemingly finite resolution also pointed in that direction.

To be fair, the same behaviour could occur with fixed-point arithmetic, but seeing as there's no native support for that, it would seem a poor choice if you were trying to write efficient code.

Similarly, using integer arithmetic with scale factors would also require the CPU to process more instructions than using floats, so floating point would seem to be a better choice, also considering that using the SPE to crunch 8x32-bit floats (with 24-bit mantissa) still provides more resolution than 8x16-bit integers.
2011-02-15 23:46:00

Author:
Aya042
Posts: 2870


To be fair, the same behaviour could occur with fixed-point arithmetic, but seeing as there's no native support for that, it would seem a poor choice if you were trying to write efficient code.


Fixed-point math is just integer math with bit-shifts. And since the inputs are constrained to [-1,1] you don't even need the bit-shifts... You just have to guard for results outside that range, which you'd have to do anyway for signals.



Similarly, using integer arithmetic with scale factors would also require the CPU to process more instructions than using floats


But storage space could be an issue as well. That affects not only how much RAM you're using but also how often you wind up with cache misses. It seems to me it could go either way as an implementation decision - which is why I hesitate to assume I know what decision they made.
2011-02-16 02:14:00

Author:
tetsujin
Posts: 187


Fixed-point math is just integer math with bit-shifts. And since the inputs are constrained to [-1,1] you don't even need the bit-shifts...

Surely with analog signals, you have a large range of discrete values between -1 and +1. I've determined the resolution is at least 0.0000001, so you would necessarily need to bit-shift if you were going to perform mathematical operations on these values were they stored as integers - otherwise you'd easily overflow even a 32-bit integer.



But storage space could be an issue as well. That affects not only how much RAM you're using but also how often you wind up with cache misses.

True, but these days memory is so cheap relative to CPU time, that it's generally a secondary concern. The impression I get is that the SPE can crunch 32-bit floats faster than 32-bit integers (which is perhaps counter-intuitive w.r.t. most hardware), and your memory argument would only make sense were they using 16-bit integers (which IMO don't have sufficient resolution to be used across the board), as, clearly, 32-bit floats and 32-bit ints both take up... 32 bits.



It seems to me it could go either way as an implementation decision - which is why I hesitate to assume I know what decision they made.

I think someone accustomed to traditional PC hardware would assume that integers would make more sense, but their benefits only apply to traditional hardware - if the SPE is better at doing floating-point operations, then (even with all their faults) they would seem a more logical choice.

Of course, there's probably no definitive way to determine which is the case, but based on what I do know, I believe they are using floats in most cases.
2011-02-16 09:20:00

Author:
Aya042
Posts: 2870


This is largely an irrelevant matter - but I'm enjoying the discussion... So here's some more.


Surely with analog signals, you have a large range of discrete values between -1 and +1. I've determined the resolution is at least 0.0000001, so you would necessarily need to bit-shift if you were going to perform mathematical operations on these values were they stored as integers - otherwise you'd easily overflow even a 32-bit integer.


You wouldn't need to bit-shift in order to add or subtract, because the "decimal point" (it's not decimal, you know what I mean) is in the same place in source and destination. Multiplication and division are another matter - but they're not supported in the logic system anyway. Once that signal hits something that affects the physical simulation, at that point fixed-point could be translated to floating-point. (It seems to me the precision needs of the digital simulation are entirely different from those of the physical simulation...)


True, but these days memory is so cheap relative to CPU time, that it's generally a secondary concern.


Memory use does add up. It generally doesn't pay to be careless with it.
The PS3 has a total of 256MiB (discounting video RAM - game consoles don't seem to have an abundance of RAM for some reason... always seemed odd to me) - no matter how cheap RAM gets, the PS3 will never have more RAM than that. Plus, the way those SPEs work, basically, is that they have their own local storage for instructions and data - 256KiB. Any data and instructions you want the SPE to operate on have to fit into that space.


The impression I get is that the SPE can crunch 32-bit floats faster than 32-bit integers


I believe you quoted some figures that indicate it's twice as fast for 32-bit floats as for 32-bit integers... Though Wikipedia says the speed for 32-bit ints and 32-bit floats is the same. (Which makes sense, since the SPE is dealing with 128 bits at a time)


your memory argument would only make sense were they using 16-bit integers (which IMO don't have sufficient resolution to be used across the board), as, clearly, 32-bit floats and 32-bit ints both take up... 32 bits.


You're forgetting that the analog is just one piece of a signal. There's a digital component as well (signed, so it takes an additional 2 bits) - plus the player identification (another 2 bits). If space were a concern, they'd be looking to stick the analog and digital parts of the signal into a single word of memory...

So if they did have to make the choice based on memory usage - they could steal four bits from a 32-bit integer or they could steal four bits from a 32-bit float.

If they did this in a 32-bit integer, they could stuff that extra data in high-order bits of the integer, leaving the lower bits to represent the fractional quantity - When they compare magnitude (for AND, OR, etc.) or subtract (for signal combiner) - they would probably have to mask out the extra bits in the input and/or the output, depending on the operation, and then re-apply the other parts of the signal afterward...

Consider this hypothetical implementation. Signals (analog + digital + player info) are stored in a 32-bit field. The top bit (31) is a sign bit for the analog component. Bits 30 and 29 contain the sign and magnitude of the digital-logic portion of the signal. Bits 28 and 27 contain the player ID. Bits 26-24 are zero. Bits 0-23 contain a fixed-point number. If bit 23 is set, the absolute value of the number is 1.0 and bits 0-22 should be clear.

So when you need to perform math on these values' magnitudes, you just blank out the top 8 bits and treat the rest of the number as an unsigned integer. That gets you the magnitude comparison needed for AND and OR. "Signal combiner" needs a bit more attention - after blanking out the high-order bits of the inputs, you would subtract them, producing a signed answer which would need to be converted back into the sign + magnitude representation if it yields a negative result. (i.e. negate the 32-bit signed result, set the top bit, and truncate the 24-bit value to make sure it's within [-1, 1]...)
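Sketched out as C, just to pin the idea down (every bit position here is my own speculation, of course):

    #include <stdint.h>

    /* The hypothetical layout, as masks: */
    #define A_SIGN  (1u << 31)      /* analog sign                           */
    #define D_SIGN  (1u << 30)      /* digital sign (order of 30/29 assumed) */
    #define D_ON    (1u << 29)      /* digital magnitude                     */
    #define PLAYER  (3u << 27)      /* player ID                             */
    #define MAG     0x00FFFFFFu     /* bits 0-23: fixed point, 1.0 == 1<<23  */

    /* AND/OR: blank the top 8 bits and compare magnitudes as plain ints. */
    static uint32_t sig_min(uint32_t a, uint32_t b) {
        uint32_t ma = a & MAG, mb = b & MAG;
        return (ma < mb) ? ma : mb;   /* caller re-applies the other bits */
    }

    /* Signal combiner: subtract the magnitudes, then convert a negative
       result back to sign + magnitude and truncate into [-1, 1]. */
    static uint32_t sig_combine(uint32_t a, uint32_t b) {
        int32_t  r    = (int32_t)(a & MAG) - (int32_t)(b & MAG);
        uint32_t sign = 0;
        if (r < 0) { r = -r; sign = A_SIGN; }   /* negate, set the top bit */
        uint32_t mag = ((uint32_t)r > (1u << 23)) ? (1u << 23) : (uint32_t)r;
        return sign | mag;
    }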

This representation seems a bit cumbersome unless you consider the limitations of the game - most logic operations ignore the sign of the value and operate on the magnitude. There's no direct support for addition, multiplication, or division - only subtraction (and, again, it's subtraction of the magnitudes of the inputs...) So there's really not an opportunity for a signal to overflow its range.

You may wonder why my speculative design uses a 24-bit field instead of a 28-bit field for the value: the reason is that it puts everything in place to conveniently convert one of these values into a single-precision float with just a few instructions. There are two cases to consider:
If the magnitude of the value is zero, then bits 24-30 are set to zero as well.
Otherwise, pull the top byte out, and set bits 0-6 (leaving the sign-bit as-is.) Shift the 32-bit field left until bit 23 is set, and decrement that copy of the top byte once for each time you bit-shift. Then store the top byte back into the 32-bit field.
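In C, the whole conversion would look something like this (a sketch - I've written the exponent arithmetic the way IEEE-754 needs it rather than literally byte-at-a-time, but the shape is the same):

    #include <stdint.h>

    /* Convert a packed signal's fixed-point magnitude (1.0 == 1<<23) into
       the bit pattern of an IEEE-754 single-precision float. */
    uint32_t fixed_to_float_bits(uint32_t signal) {
        uint32_t sign = signal & (1u << 31);
        uint32_t mag  = signal & 0x00FFFFFFu;
        if (mag == 0)
            return sign;                  /* zero case: bits 0-30 all clear    */
        uint32_t exp = 127;               /* bit 23 set means 2^0, i.e. 1.0    */
        while (!(mag & (1u << 23))) {
            mag <<= 1;                    /* shift until the leading 1-bit     */
            exp--;                        /* reaches bit 23, halving each time */
        }
        return sign | (exp << 23) | (mag & 0x007FFFFFu); /* drop implicit bit */
    }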

So any time you take a value out of the digital logic engine and apply it to something physical (like a motor bolt, etc.) the conversion to floating-point is easy. The payoff is that, during the digital logic phase, you're able to fit those other parts of the signal into the same 32-bit word.

You could do something similar with floating-point: for instance, steal the extra 4 bits from low-order bits of the significand (reducing precision) or from high-order bits of the exponent (reducing range while the value is in the digital engine)

Reducing the range of the exponent isn't a big deal, since huge values are impossible in the digital engine and tiny values are irrelevant - but taking the extra bits from the exponent would mean that those bits would always need to be set back to their assumed value before performing a logical operation (a bitwise OR on the top byte) - and if a logical operation yielded a value whose exponent did not have the expected bits in those positions (i.e. a very small value), the value would have to be treated as zero.

Taking low-order bits from the significand would surely be simpler. That would reduce the significand to 20 bits of precision (still quite adequate!). It'd even be reasonably safe, when doing calculations on those values, to not mask them out of the inputs, and just mask them back in on the result of a calculation. (For magnitude comparison it doesn't matter. For subtraction it could produce a tiny error - I don't know whether that error would matter...)

So actually, I guess using a float and taking bits from the significand might even be simpler than the fixed-point approach. The only penalty is the four bits of precision loss, and that's probably not an issue. Taking the bits from the exponent is probably the messiest of the three options...
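For comparison, the significand-stealing variant is almost nothing (a sketch again; memcpy is the portable way to get at a float's bits):

    #include <stdint.h>
    #include <string.h>

    /* Steal the 4 low-order significand bits of a float for the packed
       digital signal + player ID (speculative layout). */
    uint32_t pack_signal(float analog, uint32_t extra4) {
        uint32_t bits;
        memcpy(&bits, &analog, sizeof bits);
        return (bits & ~0xFu) | (extra4 & 0xFu);  /* overwrite the 4 LSBs */
    }

    float unpack_analog(uint32_t word) {
        float f;
        uint32_t bits = word & ~0xFu;             /* mask the stolen bits out */
        memcpy(&f, &bits, sizeof f);
        return f;                                 /* ~20 bits of precision left */
    }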



I think someone accustomed to traditional PC hardware would assume that integers would make more sense

It's not really an issue of PC hardware - though certainly when I was first learning about 3-D graphics on PCs back in the 90s, there was an emphasis on integer math that doesn't really make sense any more.

It's just that they do need to fit those four bits in somewhere - and the digital engine doesn't actually need the range offered by floats. So an approach like that could make sense.
2011-02-16 20:03:00

Author:
tetsujin
Posts: 187


Once that signal hits something that affects the physical simulation, at that point fixed-point could be translated to floating-point.

But why would you implement the system that way, when it would be less overhead to simply express everything in floating point?

A simulation engine running at 30fps is likely to be more CPU-bound than IO-bound, so if I can calculate the speed of, say, a Mover set to Speed Scale by simply multiplying the analog input signal by the Maximum Speed setting in a single CPU instruction, what possible reason would there be to add the several hundred additional instructions required to convert fixed point to floating point?



It seems to me the precision needs of the digital simulation are entirely different from those of the physical simulation...

Certainly with the digital aspect of the simulation, it would make more sense to use integer arithmetic, but even then, attempting to minimize the number of bits used adds additional instruction overhead in packing/unpacking. If performance is the primary concern, it's better to stick with the word sizes which the processor operates on natively.



Memory use does add up. It generally doesn't pay to be careless with it.

True, but I did say "secondary concern", not "completely irrelevant".

However, given that there's a limit of about 2000 "things" per level, and assuming you could use all 256MiB to store the attributes for each "thing", that gives about 131KB per "thing". Frankly, even with highly suboptimal code, you probably only need in the region of 100 bytes per "thing", meaning your simulation fits into about 200KB. Obviously some things require more memory (like bitmapped images), but the memory usage of the vector-based part is pretty trivial.



I believe you quoted some figures that indicate it's twice as fast for 32-bit floats as for 32-bit integers... Though Wikipedia says the speed for 32-bit ints and 32-bit floats is the same. (Which makes sense, since the SPE is dealing with 128 bits at a time)

Odd. The WP page seems to have "changed its mind" since that text was quoted.



You're forgetting that the analog is just one piece of a signal. There's a digital component as well (signed, so it takes an additional 2 bits) - plus the player identification (another 2 bits). If space were a concern, they'd be looking to stick the analog and digital parts of the signal into a single word of memory...

You'd probably never implement it that way - trying to pack all that into a single word would require too much overhead. Each of those pieces of information would most likely be stored in separate variables. I have insufficient information to determine exactly how it's implemented, so I can only judge it based on how I would implement such a system if I had to, and based on previous experience, there's no way I'd attempt to pack several values into a single word unless it was absolutely necessary.
2011-02-16 21:12:00

Author:
Aya042
Posts: 2870


I was gonna post about the processor overhead of packing and unpacking, which my back-of-a-napkin estimates tell me would pretty much double your processing at every component, but I was too lazy to bother. As Aya says, you wouldn't pack data like that unless you really had to, and in a real-time system I see clock cycles being precious.

Also, to the comment about multiplication / division not being needed in analogue? Really? Because I'm thinking that timers, sequencers, movers, pistons, rotators, tag and player sensors

*deep breath*

counters, projectile sensors, score sensors, gravity switches, sound objects and probably some devices that I've forgotten about all need to do either division or multiplication to deal with / generate an analogue signal (in some cases both). Indeed, the components that don't do multiplication or division are in the minority.

When we discussed this before, I suggested another argument, which is that the physics, by assumption, would be floats. In this case LBP1's analogue signals would only ever have been floats, and this legacy factor could be highly influential.
2011-02-16 21:45:00

Author:
rtm223
Posts: 6497


But why would you implement the system that way, when it would be less overhead to simply express everything in floating point?

A simulation engine running at 30fps is likely to be more CPU-bound than IO-bound, so if I can calculate the speed of, say, a Mover set to Speed Scale by simply multiplying the analog input signal by the Maximum Speed setting in a single CPU instruction, what possible reason would there be to add the several hundred additional instructions required to convert fixed point to floating point?


The nonzero case of converting my suggested fixed-point format to floating point did turn out more complicated than I'd expected. Having to shift left in a loop to get the highest-order one-bit into bit 23, due to IEEE-754's use of an implicit top bit in the significand... Yeah, that would be a deal-breaker. I didn't realize the loop would be that costly until after I'd written out the process and you pointed out that it's a lot of work... I don't know if there's a more efficient way to do the conversion, but - yeah, that's good, you've convinced me fixed-point is almost certainly not a viable option.


However, given that there's a limit of about 2000 "things" per level, and assuming you could use all 256MiB to store the attributes for each "thing"


That's a ridiculously generous assumption. Remember, some of that 256MiB has to hold your program code, sound effects, the big physical simulation that's going on at the same time, etc... Games occupy most or all of that space on a regular basis while providing nothing like the range of freedom LBP does.



Odd. The WP page seems to have "changed its mind" since that text was quoted.


Yeah, ****ed unreliable information... Still, I think the version I quoted is probably correct. If it operates on 128 bits in an instruction, it can't very well operate on more data than will fit into that 128 bits, right?


I was gonna post about the processor overhead of packing and unpacking, which my back-of-a-napkin estimates tell me would pretty much double your processing at every component, but I was too lazy to bother. As Aya says, you wouldn't pack data like that unless you really had to, and in a real-time system I see clock cycles being precious.


I don't see it. I mean, whether you pack or don't pack, you're still constantly dealing with both the logical and analog parts of signals in parallel. Let's look at the AND gate, for instance. And, since Aya rightly pointed out that the shift-left loop for converting from my suggested fixed-point scheme to float would be too costly (and I somehow didn't realize this... I must be getting old...) let's assume for the sake of this example that they're stealing the 4 low-order bits from the significand for that extra signal data.

AND, of course, works as min() on the magnitude of the analog inputs, and logical AND on the digital...
So, with two inputs and a target register, the analog step could be handled first: dest = min(abs(a), abs(b)). Those four low-order bits containing the packed digital signal and player ID don't need to be blanked out of the inputs for this operation...
Next, the digital step: dest &= 0xfffffff0; dest |= ((a & b) & (1 << ON_bit)); x = ((a ^ b) & (1 << DIGITAL_SIGN_bit)); dest |= (x | (x << (31 - DIGITAL_SIGN_bit))); - and then I don't know what you do with the player ID...

I don't know the PS3 instruction set, mind you - but generally bitwise operations are pretty cheap. That said, this example does use more of them than a non-packed implementation would. About eight instructions (which may or may not include fetch/store, depending on the instruction set) for the digital logic part of the AND gate, I figure - where if the digital and analog portions were each stored as floats, you could just multiply all the digital inputs together (to get the correct sign and magnitude for the digital) and then pick the sign bit from the digital signal and apply it to the analog signal (two bitwise operations, plus fetch and store) - I think it could be viable if memory is at a premium.
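Written out properly (still hypothetical, and using the stolen-low-4-bits layout from the float variant):

    #include <stdint.h>

    #define ON_bit           0   /* digital on/off  (positions assumed)  */
    #define DIGITAL_SIGN_bit 1   /* digital sign; bits 2-3 = player ID   */

    static uint32_t gate_and(uint32_t a, uint32_t b) {
        /* Analog step: min of the magnitudes.  Clearing bit 31 is abs()
           for a float, and positive float bit patterns order like ints,
           so the 4 packed low bits don't need to be blanked out first. */
        uint32_t ma = a & 0x7FFFFFFFu, mb = b & 0x7FFFFFFFu;
        uint32_t dest = (ma < mb) ? ma : mb;

        /* Digital step: rebuild the packed bits on the result. */
        dest &= 0xFFFFFFF0u;
        dest |= (a & b) & (1u << ON_bit);                  /* on iff both on   */
        uint32_t x = (a ^ b) & (1u << DIGITAL_SIGN_bit);   /* signs differ?    */
        dest |= x | (x << (31 - DIGITAL_SIGN_bit));        /* mirror to bit 31 */
        return dest;    /* player ID handling left open, as noted above */
    }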



Also, to the comment about multiplication / division not being needed in analogue? Really? Because I'm thinking that timers, sequencers, movers,
(etc., etc., etc.)
gravity switches, sound objects and probably some devices that I've forgotten about all need to do either division or multiplication to deal with / generate an analogue signal (in some cases both).


What I'm saying is multiplication isn't something you can directly do in logic. The devices that perform multiplication for their own purposes - those would be boundary points where the conversion happens, if there were a conversion to be done.



*deep breath*


Too lazy to post info about the CPU overhead of packing/unpacking but you're fine dumping the entire list of LBP2 gadgets on me? Come on...



When we discussed this before, I suggested another argument, which is that the physics, by assumption, would be floats. In this case LBP1's analogue signals would only ever have been floats, and this legacy factor could be highly influential.

I wouldn't imagine the physics simulation would use anything else. It seemed to me there was a possibility this wouldn't be the case for the logic engine - but I don't know, really. I know it's silly to prognosticate over these kinds of questions - particularly since there are folks out there to whom this information is known fact - but I find it fun, I hope you guys do as well.
2011-02-17 01:17:00

Author:
tetsujin
Posts: 187


Consider the case where you're building a logic circuit with binary numbers passed around on multiple wires... [quote of the entire first post snipped]

I was with you all the way up to 'consider'

...and I see the topic only went uphill from there!
2011-02-17 01:40:00

Author:
Bovrillor
Posts: 309


That's a ridiculously generous assumption.

That was merely to show the absolute upper limit. The next sentence was supposed to be a more realistic amount, in order to see the comparison between the two - i.e. that the simulation part is potentially only using a very small fraction of the available memory.

To get a rough idea, you can export your level to a file (which necessarily contains all the state information from the simulation), and look at the filesize - you may be surprised how small it actually is. Even assuming it's compressed, it's probably accurate to within an order of magnitude.



Next, the digital step: dest &= 0xfffffff0; dest |= ((a & b) & (1 << ON_bit)); x = ((a ^ b) & (1 << DIGITAL_SIGN_bit)); dest |= (x | (x << (31 - DIGITAL_SIGN_bit))); - and then I don't know what you do with the player ID...

That's still a fair number of additional instructions. You generally wouldn't bother just to save a couple of words per "thing". Assuming 2000 "things", given the choice between saving 8k and the simulation running two or three times faster, most people would choose the latter.



Too lazy to post info about the CPU overhead of packing/unpacking but you're fine dumping the entire list of LBP2 gadgets on me? Come on...

To be fair, I don't think any of us knows enough about the hardware to give anything more than a ballpark estimate - nor, frankly, do we have much incentive to learn.



I know it's silly to prognosticate over these kinds of questions - particularly since there are folks out there to whom this information is known fact - but I find it fun, I hope you guys do as well.

I suppose it's fun up to the point where it's useful (as a creator) for determining how to build things in such a way as to minimize thermo use, or to work around LBP's 'misfeatures' - beyond that, it's largely academic.
2011-02-17 10:25:00

Author:
Aya042
Posts: 2870

