Hsinchu, Taiwan


Cheng B.,National University of Singapore | Ni B.,National University of Singapore | Yan S.,National University of Singapore | Tian Q.,CS
MM'10 - Proceedings of the ACM Multimedia 2010 International Conference | Year: 2010

In this paper, we propose an intelligent photography system, which automatically and professionally generates/recommends user-favorite photo(s) from a wide view or a continuous view sequence. This task is quite challenging given that the evaluation of photo quality is under-determined and usually subjective. Motivated by the recent prevalence of online media, we present a solution by mining the underlying knowledge and experience of photographers from massively crawled professional photos (about 100,000 images, highly ranked by users) from popular photo-sharing websites, e.g., Flickr.com. Generally, far contexts are critical in characterizing the composition rules of professional photos, and thus we present a method called omni-range context modeling to learn the patch/object spatial correlation distribution for concurrent patch/object pairs at arbitrary distance. The learned photo omni-range context priors then serve as rules to guide the composition of professional photos. When a wide view is fed into the system, these priors are utilized together with other cues (e.g., placements of faces at different poses, patch number, etc.) to form a posterior probability formulation for professional sub-view finding. Moreover, this system can function as an intelligent professional-view guider based on real-time view quality assessment and the embedded compass (for recording capture direction). Beyond the salient areas targeted by most existing view recommendation algorithms, the proposed system targets professional photo composition. Qualitative experiments as well as comprehensive user studies demonstrate the validity and efficiency of the proposed omni-range context learning method as well as the automatic view finding framework. © 2010 ACM.
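The posterior formulation described above can be illustrated with a toy scorer: each co-occurring patch pair votes on a candidate view through a learned distribution over their relative placement. This is only a minimal sketch of the idea, not the paper's model; the patch labels and the prior table below are invented for the demo.

```python
import math

# Hypothetical learned priors: P(relative offset | patch-type pair).
# In the paper these would be omni-range context distributions mined
# from ~100,000 professional photos; here they are hand-picked.
prior = {("sky", "sea"): {"above": 0.9, "below": 0.1},
         ("sea", "sky"): {"above": 0.1, "below": 0.9}}

def log_posterior(patches):
    """Score a candidate view; patches is a list of (label, y) with y
    the vertical center of the patch within the view (0 = top)."""
    score = 0.0
    for la, ya in patches:
        for lb, yb in patches:
            if (la, lb) in prior and ya != yb:
                rel = "above" if ya < yb else "below"
                score += math.log(prior[(la, lb)][rel])
    return score

good = [("sky", 0.2), ("sea", 0.8)]   # sky above sea: conventional
bad  = [("sky", 0.8), ("sea", 0.2)]   # sky below sea: unconventional
print(log_posterior(good) > log_posterior(bad))   # True
```

The sub-view with the highest total log-prior would be recommended; the real system additionally folds in face placement and patch-count cues.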

PubMed | University Hospital Chalon, CS and CEA Grenoble
Type: Journal Article | Journal: Annales d'endocrinologie | Year: 2014

Radioiodine is currently used routinely in the treatment of hyperthyroidism, including Graves' disease (GD), toxic multinodular goitre (TMNG) and toxic solitary nodule (TSN), but no consensus exists on the most appropriate way to prescribe iodine: fixed doses, or doses calculated from the gland size or turnover of (131)I. We carried out the first nationwide French survey assessing current practices in radioiodine treatment of hyperthyroidism. A questionnaire was sent to French nuclear medicine hospital units and cancer treatment centres (n=69) about their practices in 2012. Euthyroidism was considered the successful outcome by 33% of respondents, whereas hypothyroidism was the aim in 26% of cases. Fixed activities were the commonest therapeutic approach (60.0% of GD prescribed doses and 72.5% for TMNG and TSN), followed by activities calculated from Marinelli's formula (based on a single uptake value and thyroid volume). The fixed administered dose was chosen from between 1 and 3 levels of standard doses, depending on the patient characteristics. Factors influencing this choice were the disease, with a median of 370 MBq for GD and 555 MBq for TSN and TMNG, thyroid volume (59%) and uptake (52%) with (131)I or (99m)Tc. Even physicians using fixed doses performed a pretherapeutic thyroid scan (98%). This study shows that practices concerning the prescription of (131)I therapeutic doses are heterogeneous, but the current trend in France, as in Europe, is the administration of fixed doses. The study provides baseline data for exploring the evolution of French clinical practices.

Picart G.,French National Center for Space Studies | Dosogne T.,CS | Smith M.,BART
13th International Conference on Space Operations, SpaceOps 2014 | Year: 2014

The overall goal of the data mining process is to extract information from a data set and transform it into an understandable structure for further use. All the operations carried out on a spacecraft need to be logged somewhere. Such daily activity logging is essential to get a macroscopic view of the maintenance activities, especially for monthly or yearly reporting. Up to now, the list of everyday activities carried out on the Earth observation spacecraft of the French Space Agency (CNES) has been filled in manually by each Flight Control Team (FCT) in an Excel® file. However, the accuracy of the information depends a lot on the proficiency of the FCT members involved in the process. Obviously there is always the risk of human error. For example, forgetting to log an operation or typing errors can and sometimes do lead to inaccurate overall reporting. Hence the need to improve the general process of data gathering. This paper deals with a new foolproof method which has been tested successfully on the Pleiades spacecraft (PHR1A and PHR1B) in order to automatically compile spacecraft activities from telecommand logbooks, which are huge XML files (around 10 million lines per year). This kind of issue falls typically within a data mining approach, and algorithms have been implemented to filter the various data inside the logbooks and extract the relevant information from them. Output files are simple ASCII tab-separated files which list the main operations performed during the period under consideration. These files may be either edited with Excel® (to benefit from its well-known filtering capabilities) or plotted as a chronogram with PrestoPlot®, a COTS tool already used at CNES for displaying telemetry parameters. Such chronograms are particularly useful for people in charge of spacecraft maintenance because they help them easily establish a potential link between spacecraft operations and telemetry behavior, especially for trend analysis.
Activity files are generated every month, but they can be generated for a shorter period. The monthly files are also concatenated to build overall activity files which gather information from the beginning of life of each spacecraft. This process was tested and tuned during the PHR1B in-orbit test phase at the beginning of 2013 and then successfully retrofitted to PHR1A. An activity database is also a great help for the FCT because any operation performed on a spacecraft can be retrieved very easily. Finally, this innovative system is adaptive and may be applied very easily to any existing spacecraft program.
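The extraction step described above can be sketched with a streaming XML parse that keeps memory flat even for multi-million-line logbooks and emits one tab-separated line per command. The element and attribute names (`telecommand`, `time`, `name`, `comment`) are assumptions for illustration; the actual CNES logbook schema is not public in this abstract.

```python
import io
import xml.etree.ElementTree as ET

def extract_activities(xml_stream, out_stream):
    """Stream-parse a telecommand logbook and write one tab-separated
    activity line per command, without loading the whole file."""
    for event, elem in ET.iterparse(xml_stream, events=("end",)):
        if elem.tag == "telecommand":          # hypothetical element name
            fields = (elem.get("time", ""),
                      elem.get("name", ""),
                      elem.findtext("comment", default=""))
            out_stream.write("\t".join(fields) + "\n")
            elem.clear()                       # free the processed subtree

sample = """<logbook>
  <telecommand time="2013-01-07T10:00:00" name="TC_LOAD_PLAN">
    <comment>Weekly mission plan upload</comment>
  </telecommand>
  <telecommand time="2013-01-07T10:05:00" name="TC_DUMP_MEM"/>
</logbook>"""

out = io.StringIO()
extract_activities(io.StringIO(sample), out)
print(out.getvalue(), end="")
```

The resulting tab-separated output opens directly in Excel® and can be concatenated month after month, matching the workflow the paper describes.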

Hoffman J.,University of California at Berkeley | Guadarrama S.,University of California at Berkeley | Tzeng E.,University of California at Berkeley | Hu R.,Tsinghua University | And 4 more authors.
Advances in Neural Information Processing Systems | Year: 2014

A major challenge in scaling object detection is the difficulty of obtaining labeled images for large numbers of categories. Recently, deep convolutional neural networks (CNNs) have emerged as clear winners on object classification benchmarks, in part due to training with 1.2M+ labeled classification images. Unfortunately, only a small fraction of those labels are available for the detection task. It is much cheaper and easier to collect large quantities of image-level labels from search engines than it is to collect detection data and label it with precise bounding boxes. In this paper, we propose Large Scale Detection through Adaptation (LSDA), an algorithm which learns the difference between the two tasks and transfers this knowledge to classifiers for categories without bounding box annotated data, turning them into detectors. Our method has the potential to enable detection for the tens of thousands of categories that lack bounding box annotations, yet have plenty of classification data. Evaluation on the ImageNet LSVRC-2013 detection challenge demonstrates the efficacy of our approach. This algorithm enables us to produce a >7.6K detector by using available classification data from leaf nodes in the ImageNet tree. We additionally demonstrate how to modify our architecture to produce a fast detector (running at 2fps for the 7.6K detector). Models and software are available at lsda.berkeleyvision.org.

Youssefi D.,CS | Michel J.,French National Center for Space Studies | Grizonnet M.,French National Center for Space Studies
Revue Francaise de Photogrammetrie et de Teledetection | Year: 2015

Segmentation is a widely used operation in very high resolution remote sensing processing, such as object-based image analysis. Since the available memory resource might be limited, it is often impossible to process a whole satellite image without using piece-wise processing, which in the case of segmentation introduces a huge amount of artifacts. The work presented in this paper introduces a solution to this problem in the case of the Mean-Shift algorithm, which guarantees that piece-wise segmentation results match exactly those of processing the full image at once. First, we define a new property of segmentation algorithms called stability. After proposing a methodology to measure the stability of segmentation algorithms, we demonstrate that among the Mean-Shift, watershed and connected-components algorithms, only the latter is stable. Then, we propose a stabilized version of the Mean-Shift algorithm and use it to build a rigorous and exact solution for piece-wise processing with this algorithm. Last, we present some examples demonstrating the usefulness of the proposed method. This method is available in the Orfeo ToolBox free software and documented in its software guide.
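The stability property above can be illustrated with two toy "segmenters": one whose per-pixel decision is purely local (stable under tiling) and one that depends on a global image statistic (unstable, like Mean-Shift's range search across tile borders). This is a deliberately simplified sketch of the measurement methodology, not the paper's algorithm; the threshold and image values are invented.

```python
def threshold_seg(img, t=0.5):
    # purely local decision per pixel: trivially stable under tiling
    return [[1 if v > t else 0 for v in row] for row in img]

def global_mean_seg(img):
    # decision depends on a global statistic: unstable under tiling,
    # because a tile's mean differs from the full image's mean
    vals = [v for row in img for v in row]
    m = sum(vals) / len(vals)
    return [[1 if v > m else 0 for v in row] for row in img]

def stable_on(img, seg, r0, r1):
    """Compare the full-image result restricted to rows r0:r1 with the
    result of segmenting only those rows, as a simple stability probe."""
    return seg(img)[r0:r1] == seg(img[r0:r1])

img = [[0.1, 0.9, 0.2],
       [0.8, 0.4, 0.1],
       [0.0, 0.0, 0.0]]

print(stable_on(img, threshold_seg, 0, 2))    # True
print(stable_on(img, global_mean_seg, 0, 2))  # False
```

The contested pixel (0.4) lies between the full-image mean (~0.28) and the tile mean (~0.42), so the global-statistic segmenter labels it differently depending on the processing extent, which is exactly the kind of piece-wise artifact the paper sets out to eliminate.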

Sengissen A.,Airbus | Giret J.-C.,CS | Coreixas C.,European Center for Research and Advanced Training in Scientific Computation | Boussuge J.-F.,European Center for Research and Advanced Training in Scientific Computation
21st AIAA/CEAS Aeroacoustics Conference | Year: 2015

This paper aims at investigating and analyzing numerical simulations of landing-gear configurations of increasing complexity using the Lattice-Boltzmann solver "LaBS". The LAGOON (LAnding-Gear nOise database for CAA validatiON) project, supported by Airbus [1, 2], provides an accurate experimental database on simplified landing-gear configurations perfectly suitable for this purpose. First, an assessment of the accuracy of the numerical approach is carried out on the LAGOON1 configuration by comparing both aerodynamic and near-field acoustic results with the LAGOON database disclosed in the frame of the NASA BANC workshop. Then, further investigations focus on the influence of mesh refinement, the subgrid-scale model and wall-law parameters. Finally, the best practices obtained are applied to the LAGOON2 & 3 configurations and make it possible to capture the impact of some geometrical components added onto the LAGOON1 baseline. © 2015, American Institute of Aeronautics and Astronautics Inc, AIAA. All Rights Reserved.

Li W.Y.-H.,CS | Huang C.-L.,CS | Chung C.-P.,CS
Proceedings of the International Conference on Parallel Processing | Year: 2011

An execute-ahead processor pre-executes instructions when a load miss would otherwise stall the processor. The typical design has several components that grow with the distance to execute ahead and that need to be carefully balanced for optimal performance. This paper presents a novel approach which unifies those components and is therefore easy to implement, with no resource-balancing problem. When executing ahead, the processor enqueues (or preserves) all instructions along with their known execution results (including register and memory values) in a preserving buffer (PB). When the leading load miss is resolved, the processor dequeues the instructions and then either restores the known execution results or dispatches the instructions not yet executed. The implementation overheads comprise the PB and a runahead cache for forwarding memory data. Only the PB grows with the distance to execute ahead. This method can be applied to both in-order and out-of-order processors. Our experiments show that a four-way superscalar out-of-order processor with a 1K-entry PB can achieve 15% and 120% speedups over the baseline design for the SPEC INT2000 and SPEC FP2000 benchmark suites, assuming a 128-entry instruction window and a 300-cycle memory access latency. © 2011 IEEE.
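The preserve/restore cycle can be sketched as a toy model: while the miss is outstanding, instructions whose inputs are known execute ahead and are preserved with their results; instructions tainted by the missing load are preserved unexecuted. When the miss resolves, the PB is drained, restoring preserved results and dispatching the rest. The instruction format and names here are illustrative, not the paper's microarchitecture.

```python
from collections import deque

def run_ahead(instrs, regs, miss_reg):
    """Execute ahead past a load miss on miss_reg, preserving every
    instruction (and any result it produced) in the PB."""
    pb = deque()
    unknown = {miss_reg}                       # registers tainted by the miss
    for dst, op, srcs in instrs:
        if unknown & set(srcs):
            pb.append((dst, op, srcs, None))   # cannot execute yet
            unknown.add(dst)                   # taint propagates
        else:
            val = op(*(regs[s] for s in srcs))
            pb.append((dst, op, srcs, val))    # preserve the known result
            regs[dst] = val
    return pb

def resolve(pb, regs, miss_reg, loaded_value):
    """Miss returned: drain the PB, restoring preserved results and
    dispatching the instructions that could not execute ahead."""
    regs[miss_reg] = loaded_value
    while pb:
        dst, op, srcs, val = pb.popleft()
        if val is None:                        # dispatch now
            val = op(*(regs[s] for s in srcs))
        regs[dst] = val                        # restore/commit

regs = {"r1": 0, "r2": 3, "r3": 4}
prog = [("r4", lambda a, b: a + b, ("r2", "r3")),   # independent of the miss
        ("r5", lambda a, b: a * b, ("r1", "r4"))]   # depends on missing r1
pb = run_ahead(prog, regs, miss_reg="r1")
resolve(pb, regs, miss_reg="r1", loaded_value=10)
print(regs["r4"], regs["r5"])   # 7 70
```

Note how only the single PB structure grows with the execute-ahead distance, which is the balancing advantage the paper claims.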

Ward E.M.,Washington Technology | Warner J.G.,Washington Technology | Maisonobe L.,CS
AIAA/AAS Astrodynamics Specialist Conference 2014 | Year: 2014

Open source software tools have been gaining acceptance in the astrodynamics community for some applications, though heritage tools still dominate precision orbit determination and propagation. This paper examines recent tide modeling improvements in the open source Orbit Extrapolation Toolkit (Orekit) and compares it with the US Naval Research Laboratory's (NRL) heritage Orbit Covariance Estimation And ANalysis (OCEAN) system. First, the two tools are compared directly against each other by propagating a given state vector for Stella, a geodetic satellite sensitive to tidal variations in the geopotential. Second, orbits were fit to International Laser Ranging Service (ILRS) laser ranging data using OCEAN and orbit determination software built around Orekit so that a more useful comparison could be made. Five days of data were used to solve for orbital parameters using OCEAN and Orekit. This solution orbit was then propagated forward 25 days and compared to subsequent five-day orbit solutions. This comparison between predicted and fitted orbit solutions is used as a metric to compare the quality of each piece of software's dynamic modeling capability. Results from the direct orbit propagation comparison indicate that the RSS of the position difference between the OCEAN and Orekit propagated orbits grows to only 7 meters over 25 days. It is also seen that the differences between OCEAN's and Orekit's implementations of Earth tides are less than 3% of the total tidal effect. The results of the orbit determination analysis show that the Orekit orbit solution comparison is at worst on the same order of magnitude in accuracy as the OCEAN orbit solution comparison, and at best more accurate than the OCEAN orbit solution comparison. While OCEAN produces a more accurate orbit prediction than Orekit in the majority of the cases studied, more testing is needed to understand the origin of the difference.
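The comparison metric used above, the per-epoch root-sum-square (RSS) of the position difference between two propagated ephemerides, is straightforward to compute once both tools output state vectors at common epochs. A minimal sketch, with made-up Cartesian positions in meters:

```python
import math

def rss_position_diff(ephem_a, ephem_b):
    """Per-epoch RSS of the position difference (meters) between two
    ephemerides sampled at the same epochs, each a list of (x, y, z)."""
    return [math.sqrt(sum((a - b) ** 2 for a, b in zip(pa, pb)))
            for pa, pb in zip(ephem_a, ephem_b)]

# Illustrative state vectors only; real values would come from the
# OCEAN and Orekit propagations of Stella's orbit.
orbit_a = [(7000e3, 0.0, 0.0), (0.0, 7000e3, 1.0)]
orbit_b = [(7000e3, 3.0, 4.0), (0.0, 7000e3, 0.0)]
print(rss_position_diff(orbit_a, orbit_b))   # [5.0, 1.0]
```

In the paper's experiment, this quantity grows to about 7 meters over the 25-day propagation span.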

PubMed | Siemens AG, A.P.S. University, University of Wisconsin - Madison and CS
Type: Journal Article | Journal: AJNR. American journal of neuroradiology | Year: 2016

Volume-of-interest (VOI) C-arm CT is a novel technique for imaging of intracranial high-contrast objects. We performed this study to evaluate the potential diagnostic value and radiation dose reduction of this technique for imaging of intracranial stents and flow diverters. Twenty-seven patients were imaged with a VOI C-arm CT scan following treatment with a flow diverter or stent-assisted coiling. The radiation dose-area product was recorded for VOI scans. For comparison, the dose-area product from 30 previously acquired consecutive full-view DynaCTs was used. Thermoluminescence dosimetry using 35 evenly distributed thermoluminescence dosimeters in an anthropomorphic head phantom was also performed with both conventional full-field and VOI acquisitions. Three observers were presented with VOI images for assessment of the potential diagnostic value. The dose-area product measurements showed an exposure reduction of 85% compared with the full-field acquisitions used for comparison. The thermoluminescence dosimetry evaluations also showed a considerable dose reduction of 79.8% throughout the volume. For most of the evaluated cases, the observers thought that diagnostically useful information was provided by the VOI images (κ = .810). Visualization of device details, such as the extent of opening, positioning, wall apposition, and aneurysm coverage, was judged of good diagnostic quality for most cases (88.9%-92.6%). In this study, VOI C-arm CT provided high-quality diagnostic images of intracranial stents and flow diverters at a dramatic reduction of radiation exposure. Image content was thought to add useful information. It is a promising method to assess device status during procedures and at follow-up.

PubMed | CS
Type: Journal Article | Journal: Journal of chemical ecology | Year: 2013

The sex pheromone emitted by individual calling females of the oriental fruit moth, Grapholita molesta, was trapped within glass capillaries, and the composition and release rates were determined by gas chromatography-mass spectrometry. Aerial release of (Z)-8-dodecenyl acetate ranged up to 25.3 ng/hr, while the mean release rate was 8.48 ± 7.26 ng/hr (SD). The proportion of (E)-8-dodecenyl acetate to (Z)-8-dodecenyl acetate was remarkably constant (4.20 ± 0.60%). Significant amounts of dodecyl acetate were also recovered but, contrary to previous reports, only trace quantities of (Z)-8-dodecenol were detected in the effluvium.
