JANA Solutions Inc., Nishi-Tokyo-shi, Japan

Sun Y., Northeastern University China | Mao Z., Northeastern University China | Gen M., Waseda University | Zheng G., JANA Solutions Inc. | Cheng R., JANA Solutions Inc.
International Journal of Innovative Computing, Information and Control

With advertising expenditure proliferating and the marketplace growing ever more competitive, companies are paying more attention to optimizing their advertising budgets. A typical advertising budget problem is how to determine the total budget level for an advertising campaign. In this paper, an optimization method is proposed for the advertising budget problem of mixed-media advertising. Unlike existing methods, which take the maximal advertising effect as the objective, the proposed method formulates the advertising budget problem as a nonlinear programming problem with the objective of minimizing the advertising budget under a constraint on response goal achievement. A novel media mix model is constructed to deal with the duplication among selected media. Furthermore, a Bayesian estimation method dedicated to parameterizing the media mix model is proposed. A genetic algorithm is adopted to find the optimal solution of the advertising budget problem. Finally, a case study on a real project is presented to illustrate the effectiveness and efficiency of the proposed method. ICIC International © 2010.
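
The abstract names the overall formulation (minimize total budget subject to a response-goal constraint, solved with a genetic algorithm) but does not spell out the media mix model or the algorithm's configuration. The sketch below only illustrates that general formulation: the diminishing-returns response function, the penalty weight, the parameter values A and B, and the simple selection/crossover/mutation scheme are all assumptions, not the paper's method.

```python
import math
import random

# Hypothetical per-medium response parameters (NOT taken from the paper).
A = [120.0, 80.0, 60.0]      # saturation level of each medium's response
B = [0.002, 0.004, 0.003]    # diminishing-returns rate of each medium
GOAL = 150.0                 # required total response
PENALTY = 1e4                # penalty weight for missing the response goal

def response(x):
    # Simple diminishing-returns stand-in for the paper's media mix model.
    return sum(a * (1.0 - math.exp(-b * xi)) for a, b, xi in zip(A, B, x))

def fitness(x):
    # Minimize total budget; penalize allocations that miss the response goal.
    shortfall = max(0.0, GOAL - response(x))
    return sum(x) + PENALTY * shortfall

def evolve(pop_size=60, generations=300, bound=5000.0):
    # Each individual is a per-medium budget allocation.
    pop = [[random.uniform(0.0, bound) for _ in A] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = random.sample(survivors, 2)
            w = random.random()
            child = [w * a + (1 - w) * b for a, b in zip(p1, p2)]           # blend crossover
            child = [max(0.0, c + random.gauss(0.0, 50.0)) for c in child]  # Gaussian mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
print("budget per medium:", [round(x, 1) for x in best], "total:", round(sum(best), 1))
```

Swapping in the paper's media mix model (which accounts for duplication among media) and its Bayesian parameter estimates would replace the hypothetical response() function used here.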

Shao H., China University of Petroleum - East China | Zheng G., JANA Solutions Inc.

In this paper, the convergence of a new back-propagation algorithm with adaptive momentum is analyzed when it is used for training feedforward neural networks with a hidden layer. A convergence theorem is presented, and sufficient conditions are offered to guarantee both weak and strong convergence results. Compared with existing results, our convergence result is deterministic in nature, and we do not require the error function to be quadratic or uniformly convex. © 2010 Elsevier B.V.
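
For readers unfamiliar with the setting, the following is a minimal sketch of back-propagation with a momentum term on a one-hidden-layer network. The specific adaptive-momentum rule analyzed in the paper is not reproduced here; the way the momentum coefficient mu is shrunk as the gradient norm grows, as well as the toy data and network sizes, are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(X[:, :1]) + 0.5 * X[:, 1:]          # toy regression target

n_hidden = 8
W1 = rng.normal(0, 0.5, size=(2, n_hidden))    # input-to-hidden weights
W2 = rng.normal(0, 0.5, size=(n_hidden, 1))    # hidden-to-output weights
vel1, vel2 = np.zeros_like(W1), np.zeros_like(W2)
eta = 0.05                                     # learning rate (illustrative)

for epoch in range(500):
    # Forward pass: tanh hidden layer, linear output.
    H = np.tanh(X @ W1)
    out = H @ W2
    err = out - y
    loss = 0.5 * np.mean(err ** 2)

    # Backward pass: mean-squared-error gradients.
    g2 = H.T @ err / len(X)
    g1 = X.T @ ((err @ W2.T) * (1 - H ** 2)) / len(X)

    # Momentum coefficient adapted to the current gradient size (an assumption,
    # not the paper's rule): smaller momentum when the gradient is large.
    gnorm = np.sqrt(np.sum(g1 ** 2) + np.sum(g2 ** 2))
    mu = 0.9 / (1.0 + gnorm)

    vel1 = mu * vel1 - eta * g1
    vel2 = mu * vel2 - eta * g2
    W1 += vel1
    W2 += vel2

print("final loss:", loss)
```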

Shao H., China University of Petroleum - East China | Zheng G., JANA Solutions Inc.

In this paper, the deterministic convergence of an online gradient method with penalty and momentum is investigated for training two-layer feedforward neural networks. The monotonicity of the new error function with the penalty term during the training iteration is first proved. Based on this result, we show that the weights are uniformly bounded during the training process and that the algorithm is deterministically convergent. Sufficient conditions are also provided for both weak and strong convergence results. © 2010 Elsevier B.V.
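
As a companion illustration, the sketch below shows an online (sample-by-sample) gradient method in which an L2 penalty term is added to the error function and a momentum term is added to each weight update, for a network with one hidden layer and one output layer. The penalty weight, momentum coefficient, learning rate, and toy data are assumptions; the paper's precise error function and convergence conditions are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 3))
y = X @ np.array([[1.0], [-2.0], [0.5]])       # toy linear target

W1 = rng.normal(0, 0.5, size=(3, 6))
W2 = rng.normal(0, 0.5, size=(6, 1))
v1, v2 = np.zeros_like(W1), np.zeros_like(W2)
eta, lam, mu = 0.05, 1e-3, 0.5                 # learning rate, penalty weight, momentum (illustrative)

for epoch in range(50):
    for i in rng.permutation(len(X)):          # online: update after each sample
        x, t = X[i:i + 1], y[i:i + 1]
        h = np.tanh(x @ W1)
        e = h @ W2 - t
        # Gradients of the per-sample squared error plus the L2 penalty term.
        g2 = h.T @ e + lam * W2
        g1 = x.T @ ((e @ W2.T) * (1 - h ** 2)) + lam * W1
        # Momentum update: new step blends the gradient step with the previous step.
        v1 = mu * v1 - eta * g1
        v2 = mu * v2 - eta * g2
        W1 += v1
        W2 += v2

print("final sample error:", float(e))
```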
