
Bokrantz R.,KTH Royal Institute of Technology | Bokrantz R.,RaySearch Laboratories | Forsgren A.,KTH Royal Institute of Technology
INFORMS Journal on Computing | Year: 2013

We consider the problem of approximating Pareto surfaces of convex multicriteria optimization problems by a discrete set of points and their convex combinations. Finding the scalarization parameters that optimally limit the approximation error when generating a single Pareto optimal solution is a nonconvex optimization problem. This problem can be solved by enumerative techniques, but at a cost that increases exponentially with the number of objectives. We present an algorithm for solving the Pareto surface approximation problem that is practical with 10 or fewer conflicting objectives, motivated by an application to radiation therapy optimization. Our enumerative scheme is, in a sense, dual to a family of previous algorithms. The proposed technique retains the quality of the best previous algorithm in this class while solving fewer subproblems. A further improvement is provided by a procedure for discarding subproblems based on reusing information from previous solves. The combined effect of the enhancements is empirically demonstrated to reduce the computational expense of solving the Pareto surface approximation problem by orders of magnitude. For problems where the objectives have positive curvature, an improved bound on the approximation error is demonstrated using transformations of the initial objectives with strictly increasing and concave functions. © 2013 INFORMS.
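As a toy illustration of the scalarization step the abstract refers to (not the paper's algorithm), the sketch below generates discrete Pareto optimal points of a one-variable biobjective convex problem via weighted sums; the objective functions and the weight grid are hypothetical choices made here for the sake of a closed-form solution.

```python
# Toy biobjective convex problem: f1(x) = x^2, f2(x) = (x - 1)^2 on the reals.
# The weighted-sum scalarization min_x  w*f1(x) + (1-w)*f2(x) has the
# closed-form minimizer x*(w) = 1 - w, so sweeping w over [0, 1] traces out
# a discrete representation of the Pareto surface.

def pareto_point(w):
    """Solve the scalarized subproblem for weight w; return (f1, f2)."""
    x = 1.0 - w                    # stationary point: 2*w*x + 2*(1-w)*(x-1) = 0
    return (x * x, (x - 1.0) ** 2)

def discrete_front(num_points):
    """Evaluate the front on an evenly spaced weight grid."""
    ws = [i / (num_points - 1) for i in range(num_points)]
    return [pareto_point(w) for w in ws]

front = discrete_front(5)
# The points are mutually nondominated: as f1 decreases, f2 increases.
```

The convex combinations of such points form an inner approximation of the Pareto surface; the hard part addressed by the paper is choosing the weights so that the gap to an outer approximation shrinks as fast as possible.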

Wedenberg M.,Karolinska Institutet | Wedenberg M.,RaySearch Laboratories | Toma-Dasu I.,Karolinska Institutet
Medical Physics | Year: 2014

Purpose: Currently in proton radiation therapy, a constant relative biological effectiveness (RBE) equal to 1.1 is assumed. The purpose of this study is to evaluate the impact of disregarding variations in RBE on the comparison of proton and photon treatment plans. Methods: Intensity-modulated treatment plans using photons and protons were created for three brain tumor cases with the target situated close to organs at risk. The proton plans were optimized assuming a standard RBE equal to 1.1, and the resulting linear energy transfer (LET) distribution for the plans was calculated. In the plan evaluation, the effect of a variable RBE was studied. The RBE model used considers the RBE variation with dose, LET, and the tissue-specific parameter α/β of photons. The plan comparison was based on dose distributions, dose-volume histograms (DVHs), and normal tissue complication probabilities (NTCPs). Results: Under the assumption of RBE = 1.1, higher doses to the tumor and lower doses to the normal tissues were obtained for the proton plans compared to the photon plans. In contrast, when accounting for RBE variations, the comparison showed lower doses to the tumor and hot spots in organs at risk in the proton plans. These hot spots resulted in higher estimated NTCPs in the proton plans compared to the photon plans. Conclusions: Disregarding RBE variations might lead to suboptimal proton plans yielding a lower effect in the tumor and a higher effect in normal tissues than expected. For cases where the target is situated close to structures sensitive to hot spot doses, this trend may lead to bias in favor of proton plans in treatment plan comparisons. © 2014 American Association of Physicists in Medicine.
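The kind of dose- and LET-dependent RBE the abstract describes can be sketched with the linear-quadratic (LQ) model, assuming the proton α scales linearly with LET while β is unchanged. The tissue parameters and the slope value q below are illustrative stand-ins, not the fitted values from the study.

```python
import math

def variable_rbe(d, let, alpha_beta_x, q=0.434):
    """LQ-based RBE for proton dose d (Gy), dose-averaged LET (keV/um),
    and photon alpha/beta ratio (Gy).  Assumed model (illustrative):
    alpha/alpha_x = 1 + q*LET/(alpha/beta)_x, beta = beta_x."""
    alpha_x = 0.1                          # photon alpha (arbitrary scale)
    beta_x = alpha_x / alpha_beta_x        # photon beta from the ratio
    alpha = alpha_x * (1.0 + q * let / alpha_beta_x)
    effect = alpha * d + beta_x * d * d    # biological effect of proton dose
    # Photon dose D_x with equal effect: beta_x*D^2 + alpha_x*D - effect = 0
    d_x = (-alpha_x + math.sqrt(alpha_x ** 2 + 4.0 * beta_x * effect)) / (2.0 * beta_x)
    return d_x / d                         # RBE = equieffective photon dose / proton dose

# For a late-responding tissue (low alpha/beta) at elevated LET, the
# variable RBE exceeds the constant 1.1 used in plan optimization:
print(variable_rbe(d=2.0, let=3.0, alpha_beta_x=2.0))
```

Under these assumptions, a plan that looks acceptable at RBE = 1.1 can hide hot spots in low-α/β organs at risk where the LET is high, which is the mechanism behind the higher NTCPs reported above.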

Fredriksson A.,KTH Royal Institute of Technology | Fredriksson A.,RaySearch Laboratories
Physics in Medicine and Biology | Year: 2012

A method is presented that automatically improves upon previous treatment plans by optimization under reference dose constraints. In such an optimization, a previous plan is taken as reference and a new optimization is performed toward some goal, such as minimization of the doses to healthy structures, under the constraint that no structure can become worse off than in the reference plan. Two types of constraints that enforce this are discussed: either each voxel or each dose-volume histogram of the improved plan must be at least as good as in the reference plan. These constraints ensure that the quality of the dose distribution cannot deteriorate, something that constraints on conventional physical penalty functions do not. To avoid discontinuous gradients, which may impede gradient-based optimization algorithms, the positive part operators that constitute the optimization functions are regularized. The method was applied to a previously optimized plan for a C-shaped phantom and the effects of the choice of regularization parameter were studied. The method resulted in reduced integral dose and reduced doses to the organ at risk while maintaining target homogeneity. It could be used to improve upon treatment plans directly or as a means of quality control of plans. © 2012 IOP Publishing Ltd.
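The regularization mentioned above can be illustrated with a smooth surrogate for the positive part operator (x)+ = max(x, 0). The specific surrogate below is one common choice made here for illustration, not necessarily the one used in the paper.

```python
import math

def pos(x):
    """Exact positive part; its derivative jumps from 0 to 1 at x = 0."""
    return max(x, 0.0)

def pos_smooth(x, eps=1e-2):
    """Smooth surrogate (x + sqrt(x^2 + eps^2)) / 2.  It overestimates
    (x)+ by at most eps/2, and its derivative
    (1 + x / sqrt(x^2 + eps^2)) / 2 varies continuously through x = 0,
    so gradient-based optimizers do not stall at the kink."""
    return 0.5 * (x + math.sqrt(x * x + eps * eps))

# A voxel-wise reference constraint "dose_i <= ref_i" can then contribute
# pos_smooth(dose_i - ref_i) to a differentiable constraint function,
# which vanishes (up to eps) exactly when the voxel is no worse than
# in the reference plan.
```

Shrinking eps tightens the approximation but re-sharpens the gradient near zero, which is the trade-off the paper studies via the choice of regularization parameter.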

Bokrantz R.,KTH Royal Institute of Technology | Bokrantz R.,RaySearch Laboratories
Physics in Medicine and Biology | Year: 2013

We consider multicriteria radiation therapy treatment planning by navigation over the Pareto surface, implemented by interpolation between discrete treatment plans. The current state of the art for calculating a discrete representation of the Pareto surface is to sandwich this set between inner and outer approximations that are updated one point at a time. In this paper, we generalize this sequential method to an algorithm that permits parallelization. The principle of the generalization is to apply the sequential method to an approximation of an inexpensive model of the Pareto surface. The information gathered from the model is subsequently used for the calculation of points from the exact Pareto surface, which are processed in parallel. The model is constructed according to the current inner and outer approximations, and given a shape that is difficult to approximate, in order to avoid parts of the Pareto surface being incorrectly disregarded. Approximations of comparable quality to those generated by the sequential method are demonstrated when the degree of parallelization is up to twice the number of dimensions of the objective space. For practical applications, the number of dimensions is typically at least five, so that a speed-up of one order of magnitude is obtained. © 2013 Institute of Physics and Engineering in Medicine.
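A minimal sketch of the parallelization principle: scalarization weights for the next batch of subproblems are planned on a cheap surrogate of the Pareto surface, and the exact (expensive) subproblems are then solved concurrently. The surrogate, the weight selection, and the closed-form toy subproblem below are all hypothetical stand-ins for the paper's components.

```python
from concurrent.futures import ThreadPoolExecutor

def solve_exact_subproblem(w):
    """Expensive scalarized solve.  Stand-in: the closed-form toy problem
    min_x  w*x^2 + (1-w)*(x-1)^2, as if each call took minutes on a real
    treatment planning problem."""
    x = 1.0 - w
    return (x * x, (x - 1.0) ** 2)

def pick_batch_weights(k):
    """Stand-in for running the sequential sandwich algorithm on a cheap
    surrogate model of the Pareto surface: returns the k scalarization
    weights judged to reduce the approximation error the most."""
    return [(i + 1) / (k + 1) for i in range(k)]

# One iteration: plan a batch on the surrogate, then solve the exact
# subproblems in parallel and add the new points to the representation.
weights = pick_batch_weights(4)
with ThreadPoolExecutor() as pool:
    new_points = list(pool.map(solve_exact_subproblem, weights))
```

In the real setting each map call is an independent plan optimization, so the batch parallelizes across cores or machines; the surrogate planning step is what keeps the batched weights from clustering on an already well-approximated part of the surface.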

Wedenberg M.,Karolinska Institutet | Wedenberg M.,RaySearch Laboratories
International Journal of Radiation Oncology Biology Physics | Year: 2013

Purpose: To apply a statistical bootstrap analysis to assess the uncertainty in the dose-response relation for the endpoints pneumonitis and myelopathy reported in the QUANTEC review. Methods and Materials: The bootstrap method assesses the uncertainty of the estimated population-based dose-response relation due to sample variability, which reflects the uncertainty due to limited numbers of patients in the studies. A large number of bootstrap replicates of the original incidence data were produced by random sampling with replacement. The analysis requires only the dose, the number of patients, and the number of occurrences of the studied endpoint for each study. Two dose-response models, a Poisson-based model and the Lyman model, were fitted to each bootstrap replicate using maximum likelihood. Results: The bootstrap analysis generates a family of curves representing the range of plausible dose-response relations, and the 95% bootstrap confidence intervals give estimated upper and lower limits on the toxicity risk. The curve families for the 2 dose-response models overlap for doses included in the studies at hand but diverge beyond that, with the Lyman model suggesting a steeper slope. The resulting distributions of the model parameters indicate correlation and non-Gaussian distribution. For both data sets, the likelihood of the observed data was higher for the Lyman model in >90% of the bootstrap replicates. Conclusions: The bootstrap method provides a statistical analysis of the uncertainty in the estimated dose-response relation for myelopathy and pneumonitis. It suggests likely values of the model parameters, their confidence intervals, and how they interrelate for each model. Finally, it can be used to evaluate to what extent the data support one model over another. For both data sets considered here, the Lyman model was preferred over the Poisson-based model. © 2013 Elsevier Inc.
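The bootstrap procedure described above can be sketched end to end on made-up incidence data: resample each study's events with replacement, refit a dose-response model to every replicate by maximum likelihood, and read off a percentile confidence interval. The data, the two-parameter sigmoid (standing in for the Poisson-based and Lyman models), and the grid-search fit are all illustrative simplifications.

```python
import math
import random

# Hypothetical incidence data: (mean dose in Gy, patients, toxicity events).
STUDIES = [(10.0, 50, 2), (20.0, 60, 9), (30.0, 40, 14), (40.0, 30, 18)]

def logistic_risk(dose, d50, gamma):
    """Two-parameter sigmoid dose-response curve (illustrative stand-in
    for the Poisson-based and Lyman models fitted in the study)."""
    return 1.0 / (1.0 + math.exp(-4.0 * gamma * (dose / d50 - 1.0)))

def fit_mle(data):
    """Crude grid-search maximum likelihood over (d50, gamma)."""
    best, best_ll = None, -math.inf
    for d50 in [16.0 + i for i in range(30)]:            # 16..45 Gy
        for gamma in [0.25 * (j + 1) for j in range(12)]:  # 0.25..3.0
            ll = 0.0
            for dose, n, k in data:
                p = min(max(logistic_risk(dose, d50, gamma), 1e-9), 1.0 - 1e-9)
                ll += k * math.log(p) + (n - k) * math.log(1.0 - p)
            if ll > best_ll:
                best, best_ll = (d50, gamma), ll
    return best

def bootstrap_replicate(rng):
    """Resample each study's patients with replacement (a binomial draw
    with the observed event rate k/n)."""
    return [(d, n, sum(rng.random() < k / n for _ in range(n)))
            for d, n, k in STUDIES]

rng = random.Random(0)
fits = [fit_mle(bootstrap_replicate(rng)) for _ in range(50)]
risks_35 = sorted(logistic_risk(35.0, d50, g) for d50, g in fits)
ci = (risks_35[1], risks_35[48])   # ~95% percentile interval at 35 Gy
```

Note that only (dose, n, events) per study is needed, exactly as the abstract states; refitting both candidate models to each replicate and comparing their likelihoods would reproduce the model-preference count reported in the Results.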
