27-01-2013, 21:32
In reply to post #6 by שטורס, which begins with "You're right, of course. I'm not aware of a computing system that isn't an especially advanced supercomputer"
Let me put it this way: if the camera incorporates a number of cellphone-camera sensors, what's the problem with putting in a few dozen processors and building a "supercomputer" whose whole job is to process the signal?
Here is an analysis of ARGUS by an engineer who worked on Google Street View, who saw the program that ארנבתול posted, and which should interest anyone working in the field (basic systems engineering, down to lens tolerances over a temperature range, hardware architecture, and so on). He talks about compression down to a tenth of a bit per pixel, in the payload itself.
Quote:
Here's what I would have done to respond to the original RFP at the time. Note that I've given this about two hours' thought, so I might be off a bit:
- I'd have the lenses and sensors sitting inside an airtight can with a thermoelectric cooler to a heat sink with a variable speed fan, and I'd use that control to hold the can interior to between 30 and 40 C (toward the top of the temperature range), or maybe even tighter. I might put a heater on the inside of the window with a thermostat to keep the inside surface isothermal to the lens. I know, you're thinking that a thermoelectric cooler is horribly inefficient, but they pump 3 watts for every watt consumed when you are pumping heat across a small temperature difference. The reason for the thermoelectric heat pump isn't to get the sensor cold, it's to get tight control. The sensors burn about 600 mW each, so I'm pumping 250 watts out with maybe 100 watts.
- I'd use a few more sensors and get the sensor overlap up to 0.25mm, which means +/-0.5% focal length is acceptable. I designed R5 and R7 with too little overlap between sensors and regretted it when we went to volume production. (See Jason, you were right, I was wrong, and I've learned.)
- Focal plane is 9 x 13 sensors on 10.9 x 8.1 mm centers. Total diameter: 105mm. This adds 32 sensors, so we're up to an even 400 sensors.
- Exiting the back of the fine gimbal would be something like 100 flex circuits carrying the signals from the sensors.
- Hook up each sensor to a Spartan-3A 3400. Nowadays I'd use an Aptina AR0330 connected to a Spartan-6, but back then the MT9P001 and Spartan-3A was a good choice.
- I'd have each FPGA connected directly to 32GB of SLC flash in 8 TSOPs, and a 32-bit LPDDR DRAM, just like we did in R7. That's 5 bytes per pixel of memory bandwidth, which is plenty for video compression.
- I'd connect a bunch of those FPGAs, let's say 8, to another FPGA which connects to gigabit ethernet, all on one board, just like we did in R7. This is a low power way to get connectivity to everything. I'd need 12 of those boards per focal plane. This all goes in the gimbal. The 48 boards, and their power and timing control are mounted to the coarse gimbal, and the lenses and sensors are mounted to the fine gimbal.
- Since this is a military project, and goes on a helicopter, I would invoke my fear of connectors and vibration, and I'd have all 9 FPGAs, plus the 8 sensors, mounted on a single rigid/flex circuit. One end goes on the focal plane inside the fine gimbal and the other goes on the coarse gimbal, and in between it's flexible.
- I'd connect all 52 boards together with a backplane that included a gigabit ethernet switch. No cables -- all the gigE runs are on 50 ohm differential pairs on the board. I'd run a single shielded CAT-6 to the chopper's avionics bay. No fiber optics. They're really neat, but power hungry. Maybe you are thinking that I'll never get 274 megabits/second for the Common Data Link through that single gigE. My experience is otherwise: FPGAs will happily run a gigE with minimum interpacket gap forever, without a hiccup. Cheap gigE switches can switch fine at full rate but have problems when they fill their buffers. These problems are fixed by having the FPGAs round-robin arbitrate between themselves with signals across that backplane. Voila, no bandwidth problem.
- The local FPGA does real time video compression directly into the flash. The transmission compression target isn't all that incredible: 1 bit per pixel for video. That gets 63 channels of 640x400x15 frames/sec into 274 Mb/s. The flash should give 1 hour of storage at that rate. If we want 10 hours of storage, that's 0.1 bits/pixel, which will require more serious video compression. I think it's still doable in that FPGA, but it will be challenging. In a modern Spartan-6 this is duck soup.
- The computer tells the local FPGAs how to configure the sensors, and what bits of video to retrieve. The FPGAs send the data to the computer, which gathers it up for the common data link and hands it off.
- I'll make a guess of 2 watts per sensor+FPGA+flash, or 736 watts. Add the central computer and switch and we're at 1 kilowatt. Making the FPGAs work hard with 0.1 bit/pixel video compression might add another 400 watts, at most.
- No SSDs, no RAID, no JPEG compression chips, no multiplexors, no fiber optic drivers, no high speed SerDes, no arrays of multicore X86 CPUs. That's easily half the electronics complexity, gone.
UPDATE: Nova ran a program on 23-Jan-2013 (Rise of the Drones) which talks about ARGUS-IS. They present Yiannis Antoniades of BAE Systems as the inventor, which suggests I have the relationship between BAE and ObjectVideo wrong in my description above. They also say something stupid about a million terabytes of data per mission, which is BS: if the camera runs for 16 hours the 368 sensors generate 2,000 terabytes of raw data.
They also say that the ARGUS-IS stores the entire flight's worth of data, also wrong. They've got 32 laptop drives in the system (one per single board computer). If those store 300 GB apiece, that's 10 terabytes of total storage. 16 hours of storage would require 0.05 bits/pixel -- no way. The JPEG2000 compressor chips are more likely to deliver 0.2 bits/pixel, which means they might be storing one of every four frames.
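His numbers hold up to a quick back-of-envelope pass. Here is a minimal sanity-check sketch of my own (not his code); it assumes the MT9P001's 2592x1944 array at 15 frames/sec, which the quote doesn't spell out:

```python
# Back-of-envelope check of the design bullets above.
# Assumed: MT9P001 = 2592x1944 pixels at 15 fps; everything else is from the quote.

MPIX = 2592 * 1944          # pixels per frame (assumed MT9P001 resolution)
FPS = 15                    # frames per second per sensor
SENSORS = 400               # his expanded design ("an even 400 sensors")

# Thermoelectric cooler: ~600 mW per sensor, ~3 W pumped per W consumed
heat_w = 0.6 * SENSORS                      # ~240 W to pump ("250 watts")
tec_w = heat_w / 3                          # ~80 W to drive it ("maybe 100 watts")

# Common Data Link: 63 channels of 640x400 at 15 fps, 1 bit/pixel
cdl_mbps = 63 * 640 * 400 * FPS * 1 / 1e6   # ~242 Mb/s, inside the 274 Mb/s link

# Flash: 32 GB per sensor at 1 bit/pixel of video compression
sensor_bps = MPIX * FPS * 1                 # ~75.6 Mb/s per sensor
hours_1bpp = 32e9 * 8 / sensor_bps / 3600   # ~0.9 h ("1 hour of storage")
hours_01bpp = hours_1bpp * 10               # ~9.4 h ("10 hours" at 0.1 bit/pixel)

# Power: 2 W per sensor+FPGA+flash (he uses the 368-sensor ARGUS count here)
electronics_w = 2.0 * 368                   # 736 W; ~1 kW with computer and switch

print(f"TEC: pump {heat_w:.0f} W using ~{tec_w:.0f} W")
print(f"CDL video: {cdl_mbps:.0f} Mb/s of 274 Mb/s")
print(f"Flash: {hours_1bpp:.1f} h at 1 bit/px, {hours_01bpp:.1f} h at 0.1 bit/px")
print(f"Electronics: {electronics_w:.0f} W before the computer and switch")
```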
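The figures in his UPDATE check out the same way. Another sketch of my own, with the same assumed sensor parameters plus 12-bit raw samples:

```python
# Sanity check of the UPDATE paragraph: raw data per mission vs. on-board disks.
# Assumed: 2592x1944 at 15 fps per sensor, 12-bit raw samples.

SENSORS = 368               # the ARGUS-IS sensor count from the quote
MPIX = 2592 * 1944
FPS = 15
MISSION_S = 16 * 3600       # a 16-hour flight

pixels = SENSORS * MPIX * FPS * MISSION_S       # ~1.6e15 pixels per mission
raw_tb = pixels * 12 / 8 / 1e12                 # ~2,400 TB raw: thousands of TB,
                                                # not "a million terabytes"

disk_tb = 32 * 300e9 / 1e12                     # ~9.6 TB across the 32 laptop drives
bpp_full_mission = disk_tb * 1e12 * 8 / pixels  # ~0.05 bits/pixel to keep it all
frames_kept = bpp_full_mission / 0.2            # at 0.2 bits/px (JPEG2000): ~1 in 4

print(f"Raw data: {raw_tb:,.0f} TB; on-board disks: {disk_tb:.1f} TB")
print(f"Storing the whole flight needs {bpp_full_mission:.3f} bits/pixel;")
print(f"at 0.2 bits/pixel you keep about {frames_kept:.0%} of frames")
```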