Wain's World
What's Happening in My World
Saturday, November 10, 2012
Committing to the Conversion
Yesterday I attended an inservice for Fairhope and Robertsdale teachers. The speaker was Dr. John H. Strange. Dr. Strange teaches at the University of South Alabama. His challenge to us teachers was to convert. Convert from teachers who want our students to "burp back" information to us, to teachers who teach our students how to "bring out their (the) brain and turn it on".
We are now 2 months into the "Digital Renaissance" in Baldwin County schools. All of my students have a MacBook. Unfortunately, some of them have lost use of their computers due to misuse. The MacBooks have presented their own set of problems. I'm not sure the students are learning any more than before. So maybe I need to change my methods. I don't think I'm approaching it correctly. The only thing I changed was the way the information was presented: I relied on the students to spend more time viewing online lessons and resources instead of listening to me and taking notes. I still expected them to "burp back" the information to me in the form of a test. And the test scores didn't seem to be any better than before. And in some cases, as with my freshman biology students, the scores seemed worse. So I have to look at my methods. What am I doing wrong? What could I do differently? What am I missing? So begins my conversion...
Saturday, February 11, 2012
Internal Assessment Lab Report Format
IB Biology Internal Assessment Lab Format
R. McGonegal – Palm Harbor University H.S.
The following titles and subtitles should be used in your lab report and given in this order.
Design
Question – must be focused and not ambiguous in any way
Hypothesis – state first & then give a logical rationale – your conclusion should
address the hypothesis you are giving here
Variables – chart or list identifying Independent, Dependent, & Controlled
Variables
Protocol Diagram – draw & label a diagram which best shows the major protocol(s) you used. Often this will focus on the technique that was used to measure the dependent variable and/or the technique that was used to ‘set up’ different increments of the independent variable. Make sure to show how control group(s) differ from experimental group(s). This is also where I want you to emphasize the inclusion of a period of time for ‘equilibration’ of equipment, fluids, organisms, etc. Time periods for equilibration should also be included in your written procedure.
Photograph of Lab Setup – annotate this to show how variables were
instituted, especially the controlled variables. Do not just label equipment.
Procedure – write in paragraph form, passive voice, and past tense
Data Collection and Processing
Raw Data Table – make sure this is raw data only. Data table design & clarity are important. A title should be given (“Raw Data Table” is not a data table title; it is a lab report section title). Make sure that all columns, etc. are properly headed & units are given. Forgetting one unit or misidentifying one unit is enough to drop your score in this section. Do not “split” a data table (putting part of a table on one page and finishing it on another). If you absolutely have to split a table (due to quantity of data), make sure that you re-do the title and all column headings. Uncertainties are mandatory and can be given within column headings for equipment precision and as footnotes beneath data tables for other types of uncertainties.
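For illustration only, here is a rough sketch of how a raw data table might be laid out under these guidelines; the variables, values, and footnote are invented, not taken from any actual investigation:

Table 1: Distance travelled at each temperature (raw data)

Temperature      Distance travelled ( +/- 0.1 cm )
( +/- 1 °C )     Trial 1   Trial 2   Trial 3   Trial 4   Trial 5
15               4.2       4.0       4.3       4.1       4.2
20               6.1       6.3       6.0       6.2       6.1

* The thermometer was not calibrated before use.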
Data Processing
Overview – this is a short paragraph section that gives an overview of
how and why you decided to process and present the data in the form
that shows up later in this section.
Sample Calculation – neatly lay out and explain one example only of
any type of manipulation that was done to the raw data to help make it
more useful for interpretation.
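As a hedged illustration of what a sample calculation might cover, here is a minimal Python sketch of one such manipulation (turning a raw distance and elapsed time into a rate); the numbers, units, and variable names are invented, and in the report itself this would simply be laid out by hand:

# Hypothetical raw measurements for one trial (not from any real data set)
distance_cm = 6.1            # distance read from a ruler with +/- 0.1 cm precision
time_min = 5.0               # elapsed time read from a stopwatch

# Processed value: rate of travel
rate_cm_per_min = distance_cm / time_min
print(f"rate = {rate_cm_per_min:.2f} cm/min")   # rate = 1.22 cm/min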
Presentation – this is typically one or more data tables (of your now processed data) and one or more graphs of this processed data. Once again, the design & clarity of data table(s) is important and the quality of graphs is also very important. Give careful consideration to your choice of graph style(s). Think about doing a scatter plot or perhaps a line graph showing error bars, or any number of other creative graphing styles, rather than just a simple line graph. Remember that demonstrating errors and uncertainties in your data is also mandatory for the processed data. Make sure that you follow good standard rules for doing graphs (valid title, axes labeled including units, etc.).
Note: Weak experimental design can sometimes limit you to pie graphs
and/or bar graphs; avoid this by good experimental design in which you
have a quantitative independent variable (with well chosen incremental
values) as well as a quantitative dependent variable.
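As a rough sketch only, here is how processed means with error bars might be plotted in Python with matplotlib; the temperatures, means, and standard deviations below are invented for illustration, and nothing about this library or layout is required by the format:

import matplotlib.pyplot as plt

# Hypothetical processed data: mean of five trials at each increment of the
# independent variable, with the sample standard deviation used as the error bar
temperature_c = [15, 20, 25, 30]          # independent variable on the x axis
mean_distance_cm = [4.2, 6.1, 7.9, 8.4]   # dependent variable (means) on the y axis
std_dev_cm = [0.2, 0.1, 0.3, 0.5]         # +/- one standard deviation per point

plt.errorbar(temperature_c, mean_distance_cm, yerr=std_dev_cm, fmt="o", capsize=4)
plt.title("Mean distance travelled vs. temperature (hypothetical data)")
plt.xlabel("Temperature (°C)")            # axis labels include units
plt.ylabel("Mean distance travelled (cm)")
plt.savefig("processed_data_graph.png")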
Conclusion & Evaluation
Conclusion - this is a paragraph section in which you get a chance to discuss
the results of your experiment. Start by addressing whether your data seems to
support or refute your hypothesis. This should be discussed and not just stated.
Specifically refer to your graphs to give support to this discussion. Avoid the use
of the word “proof” or “proves” within your conclusion, as your data will not prove
anything.
Limitations of Experimental Design – this paragraph section discusses
how well your experimental design helped answer your experimental question.
What worked well (and why) and what did not work well (and why). This is also
a section in which outlier points could be discussed (if there were any outlier
points) as well as possible reasons for those outlier points. If you did any
statistical tests, what did the results of that test show? If you have error bars on
your graph(s), what do those show?
Suggestions for Improvement - In reference to the limitations given in the
previous subsection, what realistic and useful improvements could be made if
you were to do this investigation again?
Uncertainties in Data
Showing uncertainties in raw data:
Students should be using one or more measuring tools to collect their
raw data. The most common way to present this raw data is by way of a data
table. An acceptable way to give both the unit and instrument precision of that
measuring tool is to list the variable being measured in a column heading and
give both the unit and tool precision as part of that same heading. For example:
Temperature
( +/- 1 °C ) {for a thermometer with 1 degree markings}
or
Distance Travelled
( +/- 0.1 cm ) or ( +/- 1 mm) {for a ruler with smallest increments of 1 mm}
Students should receive training to not report raw data beyond the limit of
the measuring tool being used. Thus, they should also be consistent in the use
of decimals in their data set. If a student is using the metric ruler shown above
with a precision of +/- 0.1 cm, they should not report some measurements such
as 6.1 cm, others as 6.25 cm and still others as 6 cm. The degree of precision of
the instrument should dictate the consistent choice of decimal place. The data
set shown above should read: 6.1 cm, 6.3 cm, and 6.0 cm.
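If a class is already handling its data electronically, a small Python sketch like the following could flag readings that were not recorded to the instrument's precision; the readings and the one-decimal-place rule are hypothetical, matching the +/- 0.1 cm ruler example above:

def decimal_places(reading: str) -> int:
    """Count the digits recorded after the decimal point."""
    return len(reading.split(".")[1]) if "." in reading else 0

# Hypothetical readings exactly as a student wrote them down
raw_readings = ["6.1", "6.25", "6"]

# With a ruler read to +/- 0.1 cm, every reading should carry exactly one decimal place
inconsistent = [r for r in raw_readings if decimal_places(r) != 1]
print(inconsistent)   # ['6.25', '6']; these readings should be re-recorded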
Other forms of uncertainties / errors can be given as bullet points beneath a data table. For example, if a student takes a reading ‘late’ it could/should be noted, and if the instrument used is (or is not) calibrated before use it could/should be noted. Note: Outlier points should be given in raw data even if the student is later going to exclude those points from their processing and analysis.
Showing uncertainties in presentation of processed data:
There are many ways to show that data which has undergone processing
of some type should not be considered ‘exact’. One of the best ways to
represent uncertainties in processed quantitative data is by the use of error bars
within graphs. If you recall, the lower limit of replicates in data collection is five. This means that students should be attempting at least 5 ‘trials’ or ‘repeats’ for each data point. One of the advantages of these repeats is that a mean can now be calculated from the five (or more) data points generated by those trials. The mean is more trustworthy than any one of the individual points.
Another advantage is that the student now could decide to calculate the
standard deviation of this set of data. There is currently no requirement that
students use any form of statistical testing, but calculation of standard deviation
is, in itself, a form of representing uncertainty as long as the student understands
that standard deviation is only showing how closely the data set is clustered
around the mean and does not show overarching things like “the data is or is not
valid”.
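To make that concrete, here is a minimal Python sketch of calculating the mean and sample standard deviation for one hypothetical set of five replicates (the values are invented, and using software for this is of course optional):

import statistics

# Five hypothetical replicate measurements at one value of the independent variable
replicates_cm = [6.1, 6.3, 6.0, 6.2, 6.1]

mean_cm = statistics.mean(replicates_cm)      # 6.14
std_dev_cm = statistics.stdev(replicates_cm)  # sample standard deviation, about 0.11

print(f"mean = {mean_cm:.2f} cm, standard deviation = {std_dev_cm:.2f} cm")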
Here is how a student could now use their five (or more) replicates as
error bars within a graph. The student should be graphing their independent
variable on the “x” axis and dependent variable on the “y” axis. Each plotted
point should only be the means calculated earlier. Two common forms of error
bars are:
1) plot the +/- standard deviation above and below the mean point
2) plot the range of the data (upper limit and lower limit which led to the
mean)
Either system provides a visual display of how closely the data is clustered
around the mean. A data point with a relatively small error bar is data that was
fairly consistent; a data point with a relatively large error bar is data that perhaps
showed little consistency upon collection and thus is perhaps not as ‘trustworthy’.
This makes it much easier to both identify and justify excluding an outlier point. An error bar that becomes much smaller when a single data collection point is excluded is a case in point.
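As a brief sketch of the two options above, here is how both could be calculated in Python from one hypothetical set of replicates before being handed to a graphing tool (matplotlib's errorbar, for example, accepts either a single +/- value or separate lower/upper distances):

import statistics

replicates_cm = [6.1, 6.3, 6.0, 6.2, 6.1]   # five hypothetical trials at one x value
mean_cm = statistics.mean(replicates_cm)

# Option 1: +/- one sample standard deviation above and below the mean
sd_bar_cm = statistics.stdev(replicates_cm)

# Option 2: the range of the data, as distances below and above the mean
lower_cm = mean_cm - min(replicates_cm)
upper_cm = max(replicates_cm) - mean_cm

print(f"mean = {mean_cm:.2f} cm")
print(f"standard deviation bar: +/- {sd_bar_cm:.2f} cm")
print(f"range bar: -{lower_cm:.2f} / +{upper_cm:.2f} cm")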
Error bars also give students a chance to discuss one source of
‘weaknesses and limitations’ within their Conclusion and Evaluation section of an
IA lab report. Students should make an attempt to dissect the data and not just
attempt to give an overall pattern. There are many other things they should consider as part of this section as well.
If students are going to use one or more statistical tests within their data
processing, training should occur to show students the limitations of what each
statistical test indicates about the data. For example, chi-square analysis can only show how observed data compares to predicted data, and standard deviation can only show how closely data is clustered around a mean. Students often carry out a statistical test and then do not know what to do with the results.
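For instance, if a student did choose a chi-square goodness-of-fit test, a minimal SciPy sketch might look like this; the observed and expected counts are invented (e.g. for a genetics cross), and SciPy itself is just one option:

from scipy.stats import chisquare

# Hypothetical observed counts and the counts predicted by the hypothesis
observed = [48, 52]
expected = [50, 50]

result = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {result.statistic:.2f}, p = {result.pvalue:.2f}")
# Note: this only shows how the observed counts compare to the predicted counts;
# it says nothing about whether the measurements or the design were valid.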