Contents of this article

1. How do I perform a reliability analysis in Minitab?

Go to Stat → Reliability/Survival and run the analysis that matches your own experiment or data.


2. How do I build a software reliability model, and what are the commonly used software reliability models?

Build the model in ANSYS, then run the reliability analysis with ANSYS's PDS (Probabilistic Design System) module.
I don't follow, what do you mean?


3. What does an analysis of the reasonableness and reliability of a geological survey project's budget cover?

Drilling costs, document review, coordination, and also the profit relationship between the company and the project team.


4. Which reliability software is good to use?

Adobe Photoshop CS3 is fairly good.
Hello! FMEA reliability analysis doesn't really need special software; just study the quality documentation from other companies, especially the well-known brands. If you have further questions, please ask.
Isograph

5. How is a system reliability analysis done? Hoping an expert can outline the process.

Fit a Weibull distribution to the system's reliability data (i.e. failure data), or use a Weibull distribution whose parameters are already known. Briefly, the Weibull distribution describes the failure probability density. P(T > t) is the probability that the random variable "system operating time" exceeds t, that is, the probability that the lifetime exceeds t; it is the integral of the density function from t to infinity. Given the Weibull parameters you can compute this probability, or, given a target probability, solve for the corresponding time, depending on your specific problem. You can program this in MATLAB, and Minitab can do it as well.
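The workflow described above (fit a Weibull distribution, then read reliabilities off its survival function, or invert it for a time) can be sketched in a few lines of plain Python. The shape and scale parameters below are hypothetical, standing in for values you would actually fit from failure data:

```python
import math

# Hypothetical two-parameter Weibull model, as if fitted from failure data.
beta = 1.5    # shape parameter (beta < 1: infant mortality; beta > 1: wear-out)
eta = 1000.0  # scale parameter (characteristic life), e.g. in hours

def reliability(t):
    """R(t) = P(T > t) = exp(-(t/eta)**beta), the Weibull survival function."""
    return math.exp(-((t / eta) ** beta))

def time_for_reliability(r):
    """Invert R(t) = r to find the time at which reliability has dropped to r."""
    return eta * (-math.log(r)) ** (1.0 / beta)
```

`reliability(t)` is exactly the integral of the Weibull density from t to infinity in closed form, and `time_for_reliability` answers the inverse question ("how long until reliability falls to 90%?") mentioned in the answer.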

6. Looking for a passage in English about reliability analysis

Reliability analysis:
http://www.fmeainfocentre.com/presentations/dfr9.pdf
http://www.google.com.sg/search?q=reliability+analysis+&hl=zh-CN&start=30&sa=N
http://www.statisticssolutions.com/reliability-analysis
For the implicit limit state equations found in most engineering reliability analysis, a neural network reliability analysis method based on sample screening is established. The method uses a neural network, with its strong nonlinear mapping ability, to approximate the relationship between the input and output variables of the implicit limit state equation, thereby converting the implicit reliability analysis into an explicit one and greatly reducing the work of computing the failure probability. Compared with existing neural network reliability analysis methods, the proposed method adopts the strategy of approximating the limit state equation rather than the limit state function, implemented by screening the training samples, which improves the accuracy of the reliability analysis; the numerical examples fully demonstrate the superiority of the proposed method.
(English abstract:) ...limit state equations in most engineering reliability analysis. A new method is presented on the basis of artificial neural network (ANN), where the training samples are appropriately selected. Being a powerful tool for function approximation, ANN is employed to obtain the relationship of the input parameters and the output parameters in the implicit limit state equation. The reliability analysis for the implicit limit state is t…
You can search online for the material you need, then translate it with an online translator.
Reliability Analysis

Measures of Reliability

Reliability: the fact that a scale should consistently reflect the construct it is measuring. One way to think of reliability is that, other things being equal, a person should get the same score on a questionnaire if they complete it at two different points in time (test-retest reliability). Another way to look at reliability is to say that two people who are the same in terms of the construct being measured should get the same score. In statistical terms, the usual way to look at reliability is based on the idea that individual items (or sets of items) should produce results consistent with the overall questionnaire.

The simplest way to do this in practice is to use split-half reliability. This method randomly splits the data set into two. A score for each participant is then calculated based on each half of the scale. If a scale is very reliable, a person's score on one half of the scale should be the same (or similar) to their score on the other half: therefore, across several participants, scores from the two halves of the questionnaire should correlate perfectly (well, very highly). The correlation between the two halves is the statistic computed in the split-half method, with large correlations being a sign of reliability. The problem with this method is that there are several ways in which a set of data can be split into two, and so the results could be a product of the way in which the data were split. To overcome this problem, Cronbach (1951) came up with a measure that is loosely equivalent to splitting the data in two in every possible way and computing the correlation coefficient for each split. The average of these values is equivalent to Cronbach's alpha, α, which is the most common measure of scale reliability (this is a convenient way to think of Cronbach's alpha, but see Field, 2005, for a more technically correct explanation).

There are two versions of alpha: the normal and the standardized versions. The normal alpha is appropriate when items on a scale are summed to produce a single score for that scale (the standardized α is not appropriate in these cases). The standardized alpha is useful, though, when items on a scale are standardized before being summed.

Interpreting Cronbach's α (some cautionary tales…)

You'll often see in books and journal articles, or be told by people, that a value of 0.7-0.8 is an acceptable value for Cronbach's alpha; values substantially lower indicate an unreliable scale. Kline (1999) notes that although the generally accepted value of 0.8 is appropriate for cognitive tests such as intelligence tests, for ability tests a cut-off point of 0.7 is more suitable. He goes on to say that when dealing with psychological constructs, values below even 0.7 can, realistically, be expected because of the diversity of the constructs being measured. However, Cortina (1993) notes that such general guidelines need to be used with caution because the value of alpha depends on the number of items on the scale (see Field, 2005 for details).

Alpha is also affected by reverse-scored items. For example, in our SAQ from last week we had one item (question 3) that was phrased the opposite way around to all other items. The item was "standard deviations excite me". Compare this to any other item and you'll see it requires the opposite response. For example, item 1 is "statistics make me cry". Now, if you don't like statistics then you'll strongly agree with this statement and so will get a score of 5 on our scale. For item 3, if you hate statistics then standard deviations are unlikely to excite you, so you'll strongly disagree and get a score of 1 on the scale. These reverse-phrased items are important for reducing response bias: participants will actually have to read the items in case they are phrased the other way around. In reliability analysis these reverse-scored items make a difference: in the extreme they can lead to a negative Cronbach's alpha! (See Field, 2005 for more detail.)

Therefore, if you have reverse-phrased items then you also have to reverse the way in which they're scored before you conduct reliability analysis. This is quite easy. To take our SAQ data, we have one item which is currently scored as 1 = strongly disagree, 2 = disagree, 3 = neither, 4 = agree, and 5 = strongly agree. This is fine for items phrased in such a way that agreement indicates statistics anxiety, but for item 3 ("standard deviations excite me"), disagreement indicates statistics anxiety. To reflect this numerically, we need to reverse the scale such that 1 = strongly agree, 2 = agree, 3 = neither, 4 = disagree, and 5 = strongly disagree. This way, an anxious person still gets 5 on this item (because they'd strongly disagree with it).

To reverse the scoring, find the maximum value of your response scale (in this case 5) and add one to it (so you get 6 in this case). Then, for each person, you take this value and subtract from it the score they actually got. Therefore, someone who scored 5 originally now scores 6 - 5 = 1, and someone who scored 1 originally now gets 6 - 1 = 5. Someone in the middle of the scale with a score of 3 will still get 6 - 3 = 3! Obviously it would take a long time to do this for each person, but we can get SPSS to do it for us by using Transform → Compute… (see your handout on Exploring data).
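The reverse-scoring rule just described (subtract each score from max + 1) and Cronbach's alpha itself are both easy to sketch in a few lines of Python. The questionnaire responses below are hypothetical, and the alpha calculation is the standard formula k/(k-1) * (1 - sum of item variances / variance of total scores):

```python
def reverse_score(score, scale_max=5):
    """Reverse a Likert response: (scale_max + 1) - score, so 5 -> 1, 3 -> 3, 1 -> 5."""
    return (scale_max + 1) - score

def cronbach_alpha(rows):
    """Cronbach's alpha = k/(k-1) * (1 - sum(item variances) / variance(totals))."""
    k = len(rows[0])
    def var(xs):
        m = sum(xs) / len(xs)
        # Population variance; the n vs n-1 divisor cancels in the ratio below.
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = [var([row[i] for row in rows]) for i in range(k)]
    total_var = var([sum(row) for row in rows])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical responses to a 4-item scale (1-5), one row per respondent.
# Item 3 (index 2) is reverse-phrased, so it is reverse-scored before computing alpha.
raw = [
    [5, 4, 1, 4],
    [3, 3, 4, 3],
    [4, 5, 2, 4],
    [2, 1, 4, 2],
    [4, 4, 3, 5],
]
scored = [[reverse_score(x) if i == 2 else x for i, x in enumerate(row)]
          for row in raw]
alpha = cronbach_alpha(scored)
```

Note that if item 3 were left unreversed, its negative correlation with the other items would inflate the sum of item variances relative to the total variance and drag alpha down, which is exactly the effect the passage above warns about.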
