Board: AfterPhD
I hope the Ministry of Science and Technology will print out these two papers by former Minister Chiang and compare them side by side.

Article A: Chen, Chen-Wu, Po-Chen Chen, and Wei-Ling Chiang. "Modified intelligent genetic algorithm-based adaptive neural network control for uncertain structural systems." Journal of Vibration and Control 19.9 (2013): 1333-1347.

Article B: Chen, C. W., P. C. Chen, and W. L. Chiang. "Stabilization of adaptive neural network controllers for nonlinear structural systems using a singular perturbation approach." Journal of Vibration and Control 17.8 (2011): 1241-1252.

This is quite clearly *at least* self-plagiarism (which is itself plagiarism that violates academic ethics). Former Minister Chiang should stop insisting he never plagiarized; he is only setting himself up for embarrassment.

Since the mathematical formulas are hard to reproduce here, I excerpt only a few (consecutive) paragraphs from each paper's Introduction for comparison.

Article A: ...Many NN systems, which are essentially intelligent inference systems implemented in the framework of adaptive networks, have been developed to model or control nonlinear plants with remarkable results. The desired performance can be obtained with fewer adjustable parameters, although sometimes more training is required to achieve the higher accuracy derived from the transfer function and the learning algorithm. In addition to these features, NNs also act as a universal approximator (Hartman et al., 1990; Funahashi and Nakamura, 1993) where the feedforward network is very important. A backpropagation algorithm (Hecht-Nielsen, 1989; Ku and Lee, 1995), is usually used in the feedforward type of NN but heavy and complicated learning is needed to tune each network weight. Aside from the backpropagation type of NN, another common feedforward NN is the radial basis function network (RBFN) (Powell, 1987, 1992; Park and Sandberg, 1991).

Article B: ...Many NN systems, which are essentially intelligent inference systems implemented in the framework of adaptive networks, have been developed to model or control nonlinear plants, with remarkable results. The desired performance can be obtained with fewer adjustable parameters, although sometimes more training derived from the transfer function and the learning algorithm is needed to achieve sufficient accuracy. In addition, NN also acts as a universal approximator so the feedforward network is very important (Hartman et al., 1990; Funahashi and Nakamura, 1993). A backpropagation algorithm is usually used in the feedforward type of NN, but this necessitates heavy and complicated learning to tune each network weight (Hecht-Nielsen, 1989; Ku and Lee, 1995). Besides the backpropagation type of NN, another common feedforward NN is the radial basis function network (RBFN) (Powell, 1987, 1992; Park and Sandberg, 1991).

Article A: RBFNs use only one hidden layer. The transfer function of the hidden layer is a nonlinear semi-affine function. Obviously, the learning rate of the RBFN will be faster than that of the backpropagation network. Furthermore, the RBFN can approximate any nonlinear continuous function and eliminate local minimum problems (Powell, 1987, 1992; Park and Sandberg, 1991). These features mean that the RBFN is usually used for real-time control in nonlinear dynamic systems. Some results indicate that, under certain mild function conditions, the RBFN is capable of universal approximations (Park and Sandberg, 1991; Powell, 1992).

Article B: The RBFN requires the use of only one hidden layer, and the transfer function for the hidden layer is a nonlinear semi-affine function. Obviously, the learning rate will be faster than that of the backpropagation network. Furthermore, one can approximate any nonlinear continuous function and eliminate local minimum problems with this method (Powell, 1987, 1992; Park and Sandberg, 1991). Because of these features, this technique is usually used for real-time control in nonlinear dynamic systems. Some results indicate that, under certain mild function conditions, the RBFN is even capable of universal approximations (Park and Sandberg, 1991; Powell, 1992).

Article A: Adaptive algorithms can be utilized to find the best high-performance parameters for the NN (Goodwin and Sin, 1984; Sanner and Slotine, 1992). Adaptive laws have been designed for the Lyapunov synthesis approach to tune the adjustable parameters of the RBFN, and analyze the stability of the overall system. A genetic algorithm (GA) (Goldberg, 1989; Chen, 1998), is the usual optimization technique used in the self-learning or training strategy to decide the initial values of the parameter vector. This GA-based modified adaptive neural network controller (MANNC) should improve the immediate response, the stability, and the robustness of the control system

Article B: Adaptive algorithms can be utilized to find the best high-performance parameters for the NN. The adaptive laws of the Lyapunov synthesis approach are designed to tune the adjustable parameters of the RBFN, and analyze the stability of the overall system. A genetic algorithm (GA) is the usual optimization technique used in the self-learning or training strategy to decide the initial values included in the parameter vector (Goldberg, 1989; Chen, 1998). The use of a GA-based adaptive neural network controller (ANNC) should improve the immediate response, stability, and robustness of the control system.

Article A: Another common problem encountered when switching the control input of the sliding model system is the so-called "chattering" phenomenon. The smoothing of control discontinuity inside a thin boundary layer essentially acts as a low-pass filter structure for the local dynamics, thus eliminating chattering (Utkin, 1978; Khalil, 1996). The laws are updated by the introduction of a boundary-layer function to cover parameter errors and modeling errors, and to guarantee that the state errors converge within a specified error bound.

Article B: Another common problem encountered when switching the control input of the sliding model system is the so-called "chattering" phenomenon. Sometimes the smoothing of control discontinuity inside a thin boundary layer essentially acts as a low-pass filter structure for the local dynamics, thus eliminating chattering (Utkin, 1978; Khalil, 1996). The laws for this process are updated by the introduction of a boundary-layer function to cover parameter errors and modeling errors. This also guarantees that the state errors converge within a specified error bound.

If this is not plagiarism, then what is?

Further reading: The ethics of self-plagiarism
http://cdn2.hubspot.net/hub/92785/file-5414624-pdf/media/ith-selfplagiarism-whitepaper.pdf
"Self-Plagiarism is defined as a type of plagiarism in which the writer republishes a work in its entirety or reuses portions of a previously written text while authoring a new work."
--
※ Article URL: http://www.ptt.cc/bbs/AfterPhD/M.1405463371.A.681.html
jhyen: Never mind the rest; just digging up the 60 papers retracted by JVC will be quite a show....... 07/16 06:39
bmka: Note that the second paper is not among the 60 that were caught! Looks like there are still plenty of unexploded bombs 07/16 07:23
※ Edited: bmka (68.49.100.176), 07/16/2014 07:59:12
MyDice: The Ministry of Science and Technology won't investigate these; the only option is to report them to JVC 07/16 08:10
wacomnow: Upvote for the effort! Reporters, come copy this story 07/16 08:19
WTFCAS: Another "keyboard typing error"... 07/16 08:57
flashegg: The second paper (the earlier 2011 one) is not among the 60 that were caught, 07/16 10:42
flashegg: which suggests it may have actually passed review by real scholars? 07/16 10:42
flashegg: And then the 2013 one, because of the self-plagiarism, couldn't risk a real review? 07/16 10:43
flashegg: So fake reviewer accounts were used to get it accepted by JVC. Just my personal take 07/16 10:44
bmka: That's a question for Chiang Wei-ling.. his only options are plagiarism or never having read the paper at all 07/16 10:49
flashegg: Also, CW Chen could argue that the 2013 paper is a continuation of the 2011 one 07/16 10:50
bmka: My guess is there are still other papers written from the same template 07/16 10:50
flashegg: and that he listed his advisor because the work was likewise done under Chiang Wei-ling's guidance 07/16 10:50
flashegg: Even the extensive duplicated passages could be settled with an apology for cutting corners 07/16 10:51
bmka: Even for a continuation, self-plagiarism is not allowed; that's common sense 07/16 10:51
flashegg: Anyway, this kind of self-plagiarism is not unheard of in STEM papers 07/16 10:51
flashegg: In the end the department/college review committee just sends it back for re-review and nothing comes of it 07/16 10:52
bmka: Plagiarism is plagiarism; the academic community will render its own verdict :) 07/16 10:54
flashegg: Besides, if CW Chen takes the fall and says he listed his advisor without Chiang's consent, 07/16 10:55
flashegg: purely because he had been mentored by him, or out of respect for him, and so on, 07/16 10:55
flashegg: wouldn't Chiang get off safely? This is also just my personal take~ 07/16 10:55
bmka: Being listed on one paper without being told would be one thing, but being listed on a pile of them over the years, supposedly without knowing, 07/16 10:56
bmka: and then openly recording them on his CV... that's very hard to justify 07/16 10:57
bmka: My actual guess is that Chiang Wei-ling never read these papers (tribute offerings); he just doesn't dare 07/16 10:58
bmka: admit they aren't his research and that he violated academic ethics by taking authorship 07/16 10:58
bmka: But if you dare accept tribute from students, you have to dare own it; you can't pin it on the students when things blow up 07/16 10:59
flashegg: This all comes down to ethics and human nature 07/16 11:03
flashegg: Suppose CW Chen really did list his advisor without Chiang's knowledge 07/16 11:03
flashegg: and only told him about the authorship after the paper was accepted 07/16 11:04
flashegg: How many advisors would say, "No, remove my name immediately"? 07/16 11:04
flashegg: I'd guess most would gladly accept it, and even think the student was being thoughtful 07/16 11:05
bmka: That's still Chiang's fault; the proper response would be to sternly warn the student never to do this 07/16 11:05
bmka: and that such a thing must never happen again 07/16 11:05
flashegg: I'm not condoning Chiang's behavior, just saying this sort of thing really is commonplace 07/16 11:07
bmka: Academia is a small world; you guard your own reputation, all the more so for a big name like Chiang 07/16 11:07
flashegg: It's academia's unspeakable secret; dig deeper and you'll likely pull up a whole string of them 07/16 11:07
bmka: I also know it's commonplace, but if you dare do it, don't expect to dodge responsibility when it blows up; that's all 07/16 11:08
bmka: If Chiang hadn't kept shirking responsibility, I wouldn't have bothered wasting time reading their junk papers (the more I read, the angrier I get) 07/16 11:19
bmka: Also, Chiang was far too indiscriminate, taking authorship even on papers in a third-rate journal like this 07/16 11:23
tainanuser: Upvote, very thorough! 07/16 11:42
MyDice: Can we see, from the Ministry of Science and Technology or Chiang's own webpage, how many publications 07/16 12:05
MyDice: he has had since 2010? Especially how serious the indiscriminate authorship was 07/16 12:07
MyDice: during his time as university president and minister 07/16 12:07
ceries: Impressive! 07/16 14:53
jabari: Can we blame this on the student movement? Or is it the toxic legacy of those eight years?? 07/16 16:27
jack5756: Yes, it's all the student movement's fault, and many of the papers are the toxic legacy of those eight years 07/16 17:09
isaacc: "If this is not plagiarism, then what is?" Utterly shameless! 07/16 19:30
yuyuna: By the Chen brothers' logic, this is merely a "keystroke error" 07/16 19:31
willynn: (bows in admiration) 07/17 08:37
jhyen: Looks like there are still quite a few unexploded bombs~~ 07/17 09:19
willynn: I'd have thought that rereading your own paper two years later, you'd only grow more dissatisfied and rewrite it heavily 07/17 10:00
chrisZ: Is there any way to download the paper whose title starts with "Modified"? I'd quite like to read it 07/19 02:14