I've been doing some CodeEval challenges, keeping each one in its own small, structured project. After the first few it was obvious that I needed to run each solution locally against my own test data and expected results, while still being able to submit the source code unaltered.
I have a folder for each challenge containing three files: the source .py file, a small shell script, and a file of test data.
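Each challenge folder looks something like this (the names below are just placeholders; the only convention is that all three files share the challenge's name):

some_challenge.py   - the solution, submitted to CodeEval unaltered
some_challenge.sh   - the one-line launch script
some_challenge.txt  - local test data with expected results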
This is what the boilerplate looks like ('balanced smileys' is the name of one of the challenges):
import sys


def balanced_smileys(message):
    """
    your code goes here, returning the result
    """


def code_eval(challenge):
    with open(sys.argv[1], 'r') as filehandle:
        for line in filehandle:
            candidate_and_expected_result = [value.strip() for value in line.strip().split('| expected result =')]
            candidate = candidate_and_expected_result[0]
            expected_result = candidate_and_expected_result[1] if len(candidate_and_expected_result) > 1 else None
            try:
                result = challenge(candidate)
                print(result)
                if expected_result:
                    assert str(result) == expected_result, '%s : Expected:%s, Got:%s' % (candidate, expected_result, result)
            except IndexError:
                pass


code_eval(balanced_smileys)
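The solution itself isn't the point of this post, but as an illustration of how the harness gets used, a sketch along the lines of the usual min/max open-parentheses approach would slot into the stub like this (the function just needs to take the candidate string and return something whose str() matches the expected result):

def balanced_smileys(message):
    """
    Rough sketch: track the minimum and maximum number of brackets that
    could be open, since ':(' and ':)' may be smileys rather than brackets.
    """
    min_open = max_open = 0
    for i, char in enumerate(message):
        if char == '(':
            max_open += 1
            # only forced to count as an open bracket if it can't be a ':(' smiley
            if i == 0 or message[i - 1] != ':':
                min_open += 1
        elif char == ')':
            min_open = max(0, min_open - 1)
            # only forced to count as a close bracket if it can't be a ':)' smiley
            if i == 0 or message[i - 1] != ':':
                max_open -= 1
                if max_open < 0:
                    return 'NO'
    return 'YES' if min_open == 0 else 'NO'

With something like that dropped in place of the stub, each line of the test data file is run through the function and checked against its expected result.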
The test data file (balanced_smileys.txt) looks like this:
:(:):)(::)(:() | expected result = YES
:(( | expected result = NO
i am sick today (:() | expected result = YES
(:) | expected result = YES
hacker cup: started :):) | expected result = YES
)( | expected result = NO
The launch script is simply this:
python balanced_smileys.py balanced_smileys.txt
When I make changes, I simply run:
sh balanced_smileys.sh
This works well for most challenges and saves a bit of mucking about.