So I was talking to a friend about pumps. In particular, we were talking about membrane-bound pumps (cellular biology). I thought about it for a moment and could think of no examples of a non-membrane-bound pump. Of course, with my limited knowledge, that by no means indicates that there are none. At first I thought that the whole concept of a non-membrane-bound pump was kind of odd. If the pump didn't connect to the membrane, then what was its purpose? To keep the fluid it was floating around in circulating? I tried thinking of non-biological examples of pumps and then it hit me. Perhaps the most iconic pump of all is not membrane-bound. A bike pump is a perfect example: an unattached pump that attaches to a membrane pore.
This is where my 'brilliant' idea came in. As a thought exercise, I thought of practical applications of a bike-pump-like pump in biology. I thought it would be a great way to deliver drugs: you could use the pump to target specific cells. In essence, a pill would contain two different compounds. The first compound would be the pump, a custom-designed protein with three functional parts. The first part would be a tissue-specific ligand that binds to a pore or channel of sufficient size for the drug to pass through. The second part would forcefully open/activate the channel. The third would be a pump designed for unidirectional transfer of the active drug. The second compound in the pill would, of course, be the drug itself.
It seems like a no-brainer on the surface and, at least in my head, it worked really well. But when you delve deeper into it, there are a variety of problems to overcome. First, making the custom pump would be ridiculously expensive and time consuming; it would probably be better just to make a wide-acting drug and let people deal with the side effects. The second problem would be finding a receptor or pore that exists ONLY on the tissue you want. Another problem would be making sure that your pump doesn't attach permanently. This might actually solve itself, as some receptors are internalized after activation (which might be a problem in and of itself). Of course, the biggest obstacle is the fact that many drugs act on the receptors themselves, tricking the cell into producing the desired effect, which means there is no need for a delivery system at all. Sigh... so much for these brilliant ideas of mine.
-------------------------------------------------------------------------------------------------------------------------------
Another 'brilliant' idea I had recently was a slot-machine dim sum restaurant. I figured it would be a great way to introduce newbies to the world of dim sum. By leaving the choice to chance, they get an unbiased variety of dishes. Rather than being limited to the dishes that their friends or family want, they can try out more exotic dishes. Of course, this kind of defeats the purpose of dim sum, where you choose the dishes as they come by, but it doesn't have to be used by everybody. In addition to the slot-machine selection, a rotation-sushi-like delivery system would be nice. It is unfortunate, though, that such a delivery system makes the dining experience somewhat impersonal, since the participants face the conveyor belt rather than each other. Still, despite the problems, I think it would be a cool thing to do, and not necessarily limited to dim sum. I could definitely see something like this used in conjunction with Microsoft Surface in restaurants.
Wednesday, February 23, 2011
Feeling dumb
So as some of you know, I wrote a little script to make my life easier during the process of writing this novel. The purpose of this script was to facilitate keeping multiple versions of my novel. I have a nasty tendency to rewrite parts of my story and I wanted to be able to keep track of those rewrites. This was the script:
#!/bin/bash
# Simple helper: pick a file, open it in the right editor, and save a dated copy under Revisions/.
# Split command substitutions on newlines only, so paths containing spaces survive.
IFS="
"

# List all files of the chosen type under my home directory.
function findfile {
for word in $filetype; do
case $word in
.tex) find /home/ray/ -name '*.tex' | sort;;
.odt) find /home/ray/ -name '*.odt' | sort;;
.doc) find /home/ray/ -name '*.doc' | sort;;
.html) find /home/ray/ -name '*.html' | sort;;
esac
done
}

# Open the selected file in an editor appropriate to its type.
function editfile {
for word in $filetype; do
case $word in
.tex) gedit "$response";;
.odt) oowriter "$response";;
.doc) oowriter "$response";;
.html) oowriter "$response";;
esac
done
}

# Copy the selected file into a time-stamped subdirectory of Revisions/.
function makerevisions {
if [ -d "$directory/Revisions/" ]; then
echo "Revisions directory exists"
else
echo "Revisions directory does not exist"
mkdir "$directory/Revisions/"
fi
mkdir "$directory/Revisions/$(date +%F-%R)"
cp "$response" "$directory/Revisions/$(date +%F-%R)/"
}

filetype=$(zenity \
--list \
--title="Select filetype" \
--column="Filetype" \
.tex \
.odt \
.doc \
.html)
if [ "$filetype" == "" ]; then
exit 1
fi

response=$(zenity \
--list \
--title="Select file" \
--column="File" \
$(findfile))
if [ "$response" == "" ]; then
exit 1
fi
filename=${response##*/}   # file name without the directory
directory=${response%/*}   # directory without the file name

editfile
makerevisions

echo "$filetype"
echo "$response"
echo "$filename"
echo "$directory"
unset IFS
Anyways, to continue my story, I found out that I was being extraordinarily dumb. In many ways I was simply trying to reinvent the wheel. I forgot that there were many tools out there that did the exact same thing. Basically, what I was trying to write was a primitive version control script. I realized relatively recently (to my shame) that I could use things like git and svn to do the exact same thing, except better. So I took a look at a few different tools and finally decided upon git. I chose git because, quite simply, it appeared to be the easiest to learn and I could use it offline. Now I have a script that I worked so hard on (a symptom of my amateurish scripting abilities) that's, for all intents and purposes, useless.
As a side note, it should be mentioned that these 'software revision control' tools, git included, work particularly well because I am writing in LaTeX, which is plain text. I am unsure whether they would track changes just as well for regular .doc, .odt, etc. files.
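For anyone curious, the replacement workflow is roughly the following (a minimal sketch rather than my exact commands; Master.tex simply stands in for whichever .tex file is being edited):
cd /home/ray/Documents/Novel/Tex/
git init                             # one-time setup: turn the folder into a repository
git add Master.tex                   # start tracking the manuscript
git commit -m "Initial draft"        # record a first snapshot
# ...edit Master.tex...
git diff Master.tex                  # see exactly what changed since the last snapshot
git commit -am "Rewrite chapter 2"   # snapshot all tracked files again
git log --oneline                    # list every snapshot, newest first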
Thursday, October 7, 2010
Choosing a Perspective
When I started this endeavor I thought that it would be nice to write the bulk of the story in first-person perspective. This would give readers a unique look at the world while making sure that the narrator does not know everything. However, I found that my narrator knew too much about the story. There were things that I, the writer, did not want to disclose but that the character narrating had no reason to hide. This, of course, makes writing those parts difficult and some of the scenes awkward. Therefore, I am contemplating whether or not to rewrite those sections in third person. While that perspective allows the narrator to know everything, it places no constraints on what may or may not be disclosed.
Wednesday, September 22, 2010
Document conversion tool
So I was looking for a tool to convert between various document formats (i.e., .tex to .doc, .html, .pdf, .odt, or any variation thereof) and I found it surprisingly difficult to find a tool that converts to .odt. Converting from pretty much anything to either HTML or PDF was a breeze; converting to other formats was more difficult. To convert from .tex to .html, I chose to use latex2html rather than tex4ht because I found the output to be cleaner. Conversion from .tex to .pdf was quite a bit easier. There were a few options: I could run latex, dvips, and ps2pdf; or latex and dvipdfm; or just pdflatex. These are the standard options. For some reason, when I used these methods the hyperlinked table of contents that I created (via hyperref and \tableofcontents) wouldn't show up. I chose to use rubber instead. Rubber is a wrapper for LaTeX and some companion programs, and with the '-d' option I was able to make PDFs out of the .tex files that included the hyperlinked table of contents. I don't really understand why this worked when the other methods didn't, but it did.
Now came the hard part. Converting the .tex files into .odt or .doc appeared to be near impossible to do cleanly. The best option I had heard of was to convert first into HTML, load that into Office, and then save as the desired format. I found this to work out extremely well. However, I had intended to automate the whole process of document conversion with a script, so this method was not very good for me, especially since neither Microsoft Office nor OpenOffice.org had very good command line interfaces. This was when I discovered a program called JODConverter. This is a Java program that uses OpenOffice.org to convert from one format to another. While this does mean that I probably could have found a way to use OpenOffice.org directly via the command line, who was I to complain when there was a program out there to do it for me =D. In the end I wrote a small BASH script to help me with the conversions.
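Before the script itself, here is roughly what the PDF routes mentioned above look like on the command line (a sketch, with Master.tex standing in for my source file; exact behaviour may vary between TeX distributions):
# Route 1: the classic chain
latex Master.tex
dvips Master.dvi
ps2pdf Master.ps
# Route 2: DVI straight to PDF
latex Master.tex
dvipdfm Master.dvi
# Route 3: one step
pdflatex Master.tex
# What I ended up using: rubber in PDF mode
rubber -d Master.tex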
NOTES:
-written in BASH because I'm using Arch Linux
-uses zenity to provide a GUI but is not really necessary
-while this is tailored to my LaTeX usage it can probably be adapted for anything, although just using JODConverter is probably better if LaTeX is not an issue
#!/bin/bash
# Convert the chosen .tex file into any combination of PDF, HTML, ODT, and DOC.
cd /home/ray/Documents/Novel/Tex/

# Pick the source file (generated files and working directories are hidden from the list).
response=$(zenity --list --title="Choose File" --column=File \
$(ls --hide='*.pdf' --hide='*.odt' --hide='*.html' --hide=Revisions --hide=Output /home/ray/Documents/Novel/Tex/))

# Pick one or more output formats; zenity returns the checked items separated by '|'.
input=$(zenity --list --title="Choose File Type" --checklist --column=Files --column=Description \
TRUE PDF \
TRUE HTML \
TRUE ODT \
TRUE DOC)

cd /home/ray/Documents/Novel/Tex/
IFS='|'
for word in $input; do
case $word in
PDF) rubber -f -s -d /home/ray/Documents/Novel/Tex/"$response";;
HTML) latex2html -split 0 -no_navigation -dir /home/ray/Documents/Novel/Tex/ "$response"
# Strip the file name from the table-of-contents anchors so the links still work after copy/paste.
sed "s_${response%.*x}.html#_#_" <"${response%.*x}.html" >"${response%.*x}.html.new"
rm index.html
rm "${response%.*x}.html"
mv "${response%.*x}.html.new" "${response%.*x}.html";;
ODT) latex2html -split 0 -no_navigation -dir /home/ray/Documents/Novel/Tex/ "$response"
sed "s_${response%.*x}.html#_#_" <"${response%.*x}.html" >"${response%.*x}.html.new"
rm index.html
rm "${response%.*x}.html"
mv "${response%.*x}.html.new" "${response%.*x}.html"
# Start a headless OpenOffice.org instance for JODConverter, convert, then stop it.
soffice -headless -accept="socket,host=127.0.0.1,port=8100;urp;" -nofirststartwizard &
jodconverter /home/ray/Documents/Novel/Tex/"${response%.*x}.html" /home/ray/Documents/Novel/Tex/"${response%.*x}.odt"
pkill soffice;;
DOC) latex2html -split 0 -no_navigation -dir /home/ray/Documents/Novel/Tex/ "$response"
sed "s_${response%.*x}.html#_#_" <"${response%.*x}.html" >"${response%.*x}.html.new"
rm index.html
rm "${response%.*x}.html"
mv "${response%.*x}.html.new" "${response%.*x}.html"
soffice -headless -accept="socket,host=127.0.0.1,port=8100;urp;" -nofirststartwizard &
jodconverter /home/ray/Documents/Novel/Tex/"${response%.*x}.html" /home/ray/Documents/Novel/Tex/"${response%.*x}.doc"
pkill soffice;;
esac
done
unset IFS

# Clean up intermediate files and archive the finished outputs in a time-stamped directory.
cd /home/ray/Documents/Novel/Tex/
rm $(ls --hide='*.tex' --hide='*.sh' --hide='*.html' --hide='*.odt' --hide='*.pdf' --hide='*.doc' --hide=Output --hide=Revisions)
mkdir Output/$(date +%F-%R)
cp $(ls --hide='*.tex' --hide=Revisions --hide=Output) /home/ray/Documents/Novel/Tex/Output/$(date +%F-%R)/
#
#echo "-------------------- EXTRA STEPS --------------------"
#echo "1. Open HTML with OpenOffice.org Writer"
#echo "2. Add first-line indent"
#echo "3. Save file as Master.odt"
#echo "4. Export Master.odt to GoogleDocs"
#echo "-----------------------------------------------------"
#cd ~
Monday, July 19, 2010
File Hosting
So here's the deal. Blogger is a great place to write stuff down but it's a pretty terrible place to share files that aren't images or videos. If you want files that you can edit, you can, as always, go to the GoogleDoc linked in the right-hand panel, or you can follow the other link to my website. That website will be a central place where you can find all files related to the novel as well as other miscellaneous information.
Monday, July 12, 2010
Update
Hello my non-existent readers. I would just like to give everyone an update. I've changed the layout of the blog a bit and added a page where you can access the completed story (well, completed so far). I will keep the link to the GoogleDocs version of the story in case you wish to download, edit, or distribute it.
Next section of the novel is forthcoming.
Warning: Techy stuff ahead
If you've read my previous post, A Note on Writing Software, you'll know that I am currently using LaTeX to write my story. There are many tools that I use. The three main programs are pdflatex (makes PDFs), oolatex (makes ODT [OpenDocument Text] files), and latex2html (makes an HTML file). The ease with which I can convert into all of these filetypes is one reason why I use LaTeX. Recently, though, I discovered something. For those of you who don't know, oolatex is part of TeX4ht, which is itself capable of making an HTML file via the htlatex command. I have found that when copy/pasting the HTML produced by the htlatex command into Blogspot, the paragraphs get all messed up. This is why I turned to latex2html. The only caveat with latex2html is that the table of contents hyperlinks do not work (they are keyed to the file name). This problem was quickly solved by a few sed commands replacing the part of the HTML code that tied the links to the file. Anyways, I wrote a little bash script so that I can easily convert my .tex file into .odt, .pdf, and .html all in one go. Here is said script:
#!/bin/bash
cd /home/ray/Documents/Novel/Tex/
# Temporarily comment out the \usepackage and \hypersetup lines before the oolatex run.
sed "s_\\\usepackage_%\\\usepackage_" <Master.tex >Master.tex.new
sed "s_\\\hypersetup_%\\\hypersetup_" <Master.tex.new >Master.tex
mk4ht oolatex Master.tex
# Restore the commented-out lines for the PDF run.
sed "s_%\\\usepackage_\\\usepackage_" <Master.tex >Master.tex.new
sed "s_%\\\hypersetup_\\\hypersetup_" <Master.tex.new >Master.tex
pdflatex Master.tex
#htlatex Master.tex
latex2html -split 0 -no_navigation -dir /home/ray/Documents/Novel/Tex/ Master.tex
# Strip the file name from the table-of-contents anchors so the links still work after copy/paste.
sed 's_Master.html#_#_' <Master.html >Master.html.new
rm index.html
rm Master.html
mv Master.html.new Master.html
# Remove intermediate files, keeping only sources and finished outputs.
rm $(ls --hide='*.tex' --hide='*.sh' --hide='*.html' --hide='*.odt' --hide='*.pdf')
cd ~
Now there's probably an easier way to do all of this, and I welcome suggestions, but this is the one I've settled upon for now.
Friday, June 25, 2010
On writing
The more I write, the more I learn about writing. Those of you who know me (and most of you who do not) know that I'm not the kind of person who meticulously plans out my writing. This means there are rarely any drafts or flow charts or any such things. My style of writing has primarily been to get it all out there and then perhaps do some editing. Having said that, I should probably note that I've not written anything particularly long, mostly just stand-alone papers and the like. I've never written a long thesis or similar work that would require careful sectioning and planning.
I'm finding that the more I write creatively, the more planning I must do. When must I introduce such and such characters, and how should they meet? Where do they bump into each other? Questions of setting follow similar routes. Without a modicum of planning the events do not flow or, even worse, clash with each other. So I'm forced to do some rudimentary planning, something I've not done for my writing since middle school. Furthermore, planning a creative work is fundamentally different from planning a scholarly article. In articles there is a fairly rigid standard already in place: an introduction (sometimes including a rationale), methods, results, and conclusion. In writing a novel, unless one has a definite whole-story idea floating around in their mind, none of the sections are concrete. I do not have a concrete version floating around in my head. As a matter of fact, it is mostly just random thoughts waiting to be expressed. The more I write, the more I am forced to elucidate and give structure to these random thoughts.
I have also found that my experience in writing scientific articles has altered the way I do all of my writing. The very first step for me is research. I try to gather as much information as possible on all the subjects that will be covered. Now, one can argue that, being a work of fiction, one is not required to be 100% realistic. However, I find that there is no harm in trying to be as accurate as possible. For example, many works of fantasy fiction feature fire-breathing dragons of various sizes. To me, explanations, even short ones, about how these dragons are what they are are always a good thing, unless the explanation happens to be "It's magic." These explanations can also be used to evaluate the author's knowledge and creativity. One author I've read explained that the dragons would consume various substances, hold and process them in their stomachs, and essentially belch a pyrophoric substance (McCaffrey, 1983). In another book, the dragons were crystalline in nature (which strains belief where sentience and motility are concerned) and the flames were concentrated sunlight (Furey, 1994). In a recent movie the dragons breathed fire by releasing a flammable gas that was ignited by a spark (not improbable; pistol shrimp have been known to generate temperatures of ~5000 K). Each of these reflects the style and care that the authors put into their work.
Now back to the main point. Creating a new world for your story to take place in is difficult. While certain aspects of the story can be written off as 'magic' or 'science fiction', others should remain accurate. One cannot create a world where the very laws of the universe do not apply; the audience must have something in the setting to relate to (e.g., gravity cannot suddenly push things away from each other). So you do research. I found rather early on that a quick look would not be sufficient. There was so much to learn. When creating a land, a map of the setting, one has to take into consideration the geography and the effect it has on the weather. The weather then affects the industries that can develop. One has to look at human behaviour and how people settle. If one were to create a new organism, one must also create the anatomy and physiology that goes along with it. Most of these can be borrowed from organisms here on earth. The size and shape of an organism greatly affect its complexity, so if one were to make a man-sized organism one has to consider things like how an open circulatory system would be ineffective at nutrient transport at that scale. There are many more examples, but I won't bore you with them (and perhaps spoil the surprises of the story). Long story short, I've probably done more research on this novel/story/whatever than on any other single project I've done before, spread across disciplines (geography, climatology, evolution, etc.).
1. Furey, Maggie. 1994. Aurian. Orbit Books. ISBN: 0-09-927071-4
2. McCaffrey, Anne. 1983. Moreta: Dragonlady of Pern. Del Rey / Ballantine. ISBN: 0-345-29874-8